Dataset columns:
- url: string (66 to 66 characters)
- text: string (294 to 30.1k characters)
- num_labels: sequence
- arr_labels: sequence
- labels: sequence
https://api.github.com/repos/huggingface/transformers/issues/22848
TITLE Add LLaVA model COMMENTS 11 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description [LLaVA](https://llava-vl.github.io/) is a multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, "achieving impressive chat capabilities mimicking spirits of the multimodal GPT-4". ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/haotian-liu/LLaVA
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22188
TITLE XGLMForCausalLM does not support `device_map='auto'` for load 8 bit COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers: v4.27.0 ### Who can help? @sgugger @muellerzr ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was use this code. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "facebook/xglm-1.7B" tokenizer = AutoTokenizer.from_pretrained(model_name) model_8bit = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map='auto') ``` Error: ```python Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[5], line 3 1 model_name = "facebook/xglm-1.7B" 2 tokenizer = AutoTokenizer.from_pretrained(model_name) ----> 3 model_8bit = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map='auto') File /usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py:471, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 469 elif type(config) in cls._model_mapping.keys(): 470 model_class = _get_model_class(config, cls._model_mapping) --> 471 return model_class.from_pretrained( 472 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs 473 ) 474 raise ValueError( 475 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" 476 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." 477 ) File /usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py:2556, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2550 special_dtypes = { 2551 name: torch.float32 2552 for name, _ in model.named_parameters() 2553 if any(m in name for m in keep_in_fp32_modules) 2554 } 2555 if model._no_split_modules is None: -> 2556 raise ValueError(f"{model.__class__.__name__} does not support `device_map='{device_map}'` yet.") 2557 no_split_modules = model._no_split_modules 2558 if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: ValueError: XGLMForCausalLM does not support `device_map='auto'` yet. ``` ### Expected behavior XGLMForCausalLM should support `device_map='auto'`.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/24734
TITLE bug: eval_accumulation_steps can lead to incorrect metrics COMMENTS 3 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.11.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? Hey @sgugger, I'm tagging you since this has to do with the trainer. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using the `run_qa.py` script in the `examples/pytorch/question-answering/` folder ```bash python run_qa.py \ --model_name_or_path "sjrhuschlee/flan-t5-base-squad2" \ --dataset_name squad_v2 \ --output_dir "tmp/eval_squad_v2/" \ --version_2_with_negative True \ --max_seq_length 512 \ --doc_stride 128 \ --do_eval \ --per_device_eval_batch_size 24 \ --tf32 True \ --dataloader_num_workers 6 \ --preprocessing_num_workers 6 \ --bf16_full_eval \ --eval_accumulation_steps 2 \ --overwrite_output_dir False ``` I found that the calculated metrics when using `eval_accumulation_steps` is not always correct. When not using `eval_accumulation_steps` with the above script I find that I get the expected metrics. However, I found that I needed to use `eval_accumulation_steps` for evaluation of the `flan-t5` models with the above parameters on my system otherwise the memory usage on the GPU would fluctuate from 4 - 8GB which could cause an OOM. I believe I found the cause for the inconsistency in the metrics. Specifically this line https://github.com/huggingface/transformers/blob/a074a5d34d6411fb00e83a2ed30acf23d8c976b5/src/transformers/trainer.py#L3150 does not cover the edge case where the total number of batches in the evaluation is not exactly divisible by `eval_accumulation_steps`. For example, if `eval_accumulation_steps = 2` and the total number of batches is 613, then only the last batch is used when calculating `all_preds`. I was able to partially fix this problem by adding a new variable called `total_steps` and updating the if statement ```python logger.info(f"***** Running {description} *****") if has_length(dataloader): total_steps = len(dataloader) logger.info(f" Num examples = {self.num_examples(dataloader)}") else: total_steps = None logger.info(" Num examples: Unknown") ... if args.eval_accumulation_steps is not None and ( (step + 1) % args.eval_accumulation_steps == 0 or (step + 1) == total_steps ): ``` However, this will still be a problem for dataloaders that don't have a defined length. ### Expected behavior Using `eval_accumulation_steps` should work in every case even when the number of batches is not divisible by `eval_accumulation_steps`.
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
https://api.github.com/repos/huggingface/transformers/issues/23655
TITLE Add EnCodec model COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Adds the EnCodec neural codec from the [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) paper. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22106
TITLE [i18n-<languageCode>] Translating docs to <languageName> COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY <!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/24602
TITLE Support gradient checkpointing for ESM models COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Would you please add the `gradient_checkpointing_enable()` feature for ESM models? These models are currently the best available pre-trained protein language models for researchers. Many thanks.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/22570
TITLE Add MobileViT v2 COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description [MobileViT](https://openreview.net/forum?id=vh-0sUt8HlG) is a computer vision model that combines CNNs with transformers that has already been added to Transformers. [MobileViT v2](https://arxiv.org/abs/2206.02680) is the second version; it is constructed by replacing multi-headed self-attention in MobileViT v1 with the proposed separable self-attention. Does Hugging Face have plan to add MobileViT v2 to Transformers? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The official implementation is from Apple at this link: [https://github.com/apple/ml-cvnets](https://github.com/apple/ml-cvnets) The timm library also implemented it and has pre-trained weights at this link: [https://github.com/huggingface/pytorch-image-models/blob/82cb47bcf360e1974c00c35c2aa9e242e6b5b565/timm/models/mobilevit.py](https://github.com/huggingface/pytorch-image-models/blob/82cb47bcf360e1974c00c35c2aa9e242e6b5b565/timm/models/mobilevit.py)
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/23407
TITLE Fix translation no_trainer COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR fixes the reason translation has been failing, by adding in the same `num_beams` that were found to be used in the test. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger, cc @ydshieh
[ 0 ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Examples" ]
https://api.github.com/repos/huggingface/transformers/issues/22867
TITLE `push_to_hub` with `branch` or `revision` keyword argument COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request In `datasets`, you can upload a dataset to a `branch`. In the `transformers` package, it doesn't seem like `branch` or `revision` [are supported](https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub). ### Motivation Pushing a model to the Hub with a revision seems a little harder. It seems like I would need to find the model's cache directory and use `upload_folder` from `huggingface_hub` to upload to the correct revision. I could very well be missing the right documentation, but I can't seem to figure out how/where to do this. ### Your contribution Maybe a PR?
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/24216
TITLE Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/pplm COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. 
It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=3.5.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/22786
TITLE Implement a decode method in transformers.BasicTokenizer COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Transformers provides a nice BasicTokenizer for basic tokenization when we don't need BPE tokenizers. For data processing (like data format conversion), it would be helpful to offer a decode method for basic use. ### Motivation When converting data formats in data processing tasks, we often need to recover a list of tokens into continuous, readable text. ### Your contribution None.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/23036
TITLE [New model] Bark for realistic text-to-speech COMMENTS 3 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description As stated in their [README](https://github.com/suno-ai/bark/blob/main/README.md): > Bark is a transformer-based text-to-audio model created by [Suno](https://suno.ai/). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference. Some of their demos are quite amazing (albeit slightly creepy), being able to add "uhms" and "ahhs" in the synthesized audio. For example: ``` Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe. ``` https://user-images.githubusercontent.com/34592747/238155864-cfa98e54-721c-4b9c-b962-688e09db684f.webm ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation GitHub repo: https://github.com/suno-ai/bark Author: @gkucsko Demo: https://huggingface.co/spaces/suno/bark Model weights: Although not very well documented, [here](https://github.com/suno-ai/bark/blob/2c12023eb22868a633b76357b69d657b374736d9/bark/generation.py#L92-L119) is the portion of the code which links to the model weights. @Vaibhavs10 also looks to have uploaded them to the HF Hub [here](https://huggingface.co/reach-vb/bark-small) 🔥
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22240
TITLE Add InternImage COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description InternImage is a new large-scale CNN-based foundation model, which can obtain the gain from increasing parameters and training data like ViTs. Different from the recent CNNs that focus on large dense kernels, InternImage takes deformable convolution as the core operator, so that this model not only has the large effective receptive field required for downstream tasks such as detection and segmentation, but also has the adaptive spatial aggregation conditioned by input and task information. InternImage-H achieved a new record 65.4 mAP on COCO test-dev and 62.9 mIoU on ADE20K, outperforming current leading CNNs and ViTs. It is worth noting that InternImage relies on a custom cuda operator, so if this causes problems for model addition, you can replace [the cuda operator](https://github.com/OpenGVLab/InternImage/blob/master/classification/ops_dcnv3/modules/dcnv3.py#L218) with [a pytorch implementation](https://github.com/OpenGVLab/InternImage/blob/master/classification/ops_dcnv3/modules/dcnv3.py#L91). In fact, we have already submitted [a version of the code on transformers](https://huggingface.co/OpenGVLab/internimage_t_1k_224/tree/main), however, due to security reasons, the code we submitted cannot call your web inference api, so we would like you to add InternImage to transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/OpenGVLab/InternImage
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24849
TITLE unscale_() has already been called on this optimizer since the last update(). COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Hi all, I'm facing the error in the subject. I saw this problem have been already solved but I still have this. This is how I configured the parameters for the trainer. ``` trainer = transformers.Trainer( model=model, # model is decapoda-research/llama-7b-hf train_dataset=data["train"], args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, # 4 micro batch size gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, # 16 auto_find_batch_size=False, # set True to avoid unscale() problem warmup_steps=100, num_train_epochs=EPOCHS, #2 epochs learning_rate=LEARNING_RATE, # 3e-4 fp16=True, logging_steps=20, optim="adamw_torch", output_dir=NAME, save_total_limit=3, save_strategy="steps", save_steps=200, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) ``` The strange behaviour is that the problem raises after the end of the first epoch. ``` {'loss': 0.8378, 'learning_rate': 0.00016153846153846153, 'epoch': 0.99} 50%|███████████████████████████████████████████▌ | 831/1660 [15:57<6:52:51, 29.88s/it] Traceback (most recent call last): File "/home/paco/dev/stambecco/train.py", line 138, in <module> trainer.train(resume_from_checkpoint=checkpoint_flag) File "/home/paco/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "/home/paco/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1850, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/home/paco/.local/lib/python3.10/site-packages/accelerate/accelerator.py", line 1893, in clip_grad_norm_ self.unscale_gradients() File "/home/paco/.local/lib/python3.10/site-packages/accelerate/accelerator.py", line 1856, in unscale_gradients self.scaler.unscale_(opt) File "/home/paco/.local/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). 
50%|█████ | 831/1660 [16:27<16:24, 1.19s/it] ``` ### System Info The environment is WSL `Linux 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ` **pip list** ``` Package Version ------------------------ ------------- accelerate 0.20.3 aiohttp 3.8.4 aiosignal 1.3.1 async-timeout 4.0.2 attrs 23.1.0 bitsandbytes 0.39.1 blinker 1.4 certifi 2022.12.7 charset-normalizer 2.1.1 cmake 3.25.0 command-not-found 0.3 cryptography 3.4.8 datasets 2.13.0 dbus-python 1.2.18 dill 0.3.6 distro 1.7.0 distro-info 1.1build1 filelock 3.9.0 frozenlist 1.3.3 fsspec 2023.6.0 httplib2 0.20.2 huggingface-hub 0.15.1 idna 3.4 importlib-metadata 4.6.4 jeepney 0.7.1 Jinja2 3.1.2 keyring 23.5.0 launchpadlib 1.10.16 lazr.restfulclient 0.14.4 lazr.uri 1.0.6 lit 15.0.7 loralib 0.1.1 MarkupSafe 2.1.2 more-itertools 8.10.0 mpmath 1.2.1 multidict 6.0.4 multiprocess 0.70.14 netifaces 0.11.0 networkx 3.0 numpy 1.24.1 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.0 packaging 23.1 pandas 2.0.2 peft 0.4.0.dev0 Pillow 9.3.0 pip 22.0.2 psutil 5.9.5 pyarrow 12.0.1 PyGObject 3.42.1 PyJWT 2.3.0 pyparsing 2.4.7 python-apt 2.4.0+ubuntu1 python-dateutil 2.8.2 pytz 2023.3 PyYAML 5.4.1 regex 2023.6.3 requests 2.28.1 safetensors 0.3.1 scipy 1.10.1 SecretStorage 3.3.1 sentencepiece 0.1.99 setuptools 59.6.0 six 1.16.0 ssh-import-id 5.11 sympy 1.11.1 systemd-python 234 tokenizers 0.13.3 torch 2.0.1+cu117 torchaudio 2.0.2+cu117 torchvision 0.15.2+cu117 tqdm 4.65.0 transformers 4.31.0.dev0 triton 2.0.0 typing_extensions 4.4.0 tzdata 2023.3 ubuntu-advantage-tools 8001 ufw 0.36.1 unattended-upgrades 0.1 urllib3 1.26.13 wadllib 1.3.6 wheel 0.37.1 xxhash 3.2.0 yarl 1.9.2 zipp 1.0.0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction ``` tokenizer = LlamaTokenizer.from_pretrained( BASE_MODEL, add_eos_token=True ) model = prepare_model_for_int8_training(model) print("Preparing LoRA weights") config = LoraConfig( r=LORA_R, lora_alpha=LORA_ALPHA, target_modules=["q_proj", "v_proj"], lora_dropout=LORA_DROPOUT, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) tokenizer.pad_token_id = 0 # We want this to be different from the eos token if DATA_PATH.endswith(".json") or DATA_PATH.endswith(".jsonl"): data = load_dataset("json", data_files=DATA_PATH) else: data = load_dataset(DATA_PATH) # Functions tokenize() and generate_prompt() read the json file with the following format: # { # "instruction": "", # "input": "", # "output": "" # }, data = data.shuffle().map(lambda x: tokenize(generate_prompt(x))) model.print_trainable_parameters() trainer = transformers.Trainer( model=model, train_dataset=data["train"], args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, auto_find_batch_size=False, warmup_steps=100, num_train_epochs=EPOCHS, learning_rate=LEARNING_RATE, fp16=True, logging_steps=20, optim="adamw_torch", output_dir=NAME, save_total_limit=3, save_strategy="steps", save_steps=200, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) model.config.use_cache = False checkpoint_folder = os.path.join(os.getcwd(), NAME) # check if the checkpoint folder exists and is not empty checkpoint_flag = os.path.isdir(checkpoint_folder) and len(os.listdir(checkpoint_folder))> 0 print(f"Does a checkpoint folder exists? {checkpoint_flag}\n") trainer.train(resume_from_checkpoint=checkpoint_flag) model.save_pretrained(f"models/{NAME}") ``` ### Expected behavior Not raising the error and continue with the epoch #2
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
https://api.github.com/repos/huggingface/transformers/issues/24679
TITLE Custom vision encoder-decoder problem COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description I'm trying to make a custom vision encoder-decoder model. I want to use pre-trained encoder but use decoder from scratch, So I cannot use `VisionEncoderDecoderModel.from_pretrained()`. Specifically, I want to use pre-trained `deit` model as a encoder, and custom trained `Electra` as a decoder. I write code like below. In train step, there is no problem. But I got a problem which says "model have no attribute 'generate'". How can I implement or import `generate` function? ``` class CustomEncoderDecoderModel(nn.Module): config_class = VisionEncoderDecoderConfig def __init__(self, encoder_name, decoder_config, config=None): super(CustomEncoderDecoderModel, self).__init__() self.encoder = AutoModel.from_pretrained(encoder_name) self.decoder_config = decoder_config self.decoder = AutoModelForCausalLM.from_config(self.decoder_config) self.config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(self.encoder.config, self.decoder.config) self.criterion = nn.CrossEntropyLoss() self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size) def forward(self, pixel_values, labels, decoder_input_ids=None, decoder_input_embeds=None, decoder_attention_mask=None, decoder_inputs_embeds=None, past_key_values=None): encoder_outputs = self.encoder(pixel_values, output_attentions=True) encoder_hidden_states = encoder_outputs[0] encoder_attention_mask = None if decoder_input_ids is None and decoder_input_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.decoder.config.pad_token_id, decoder_start_token_id=2 ) if self.encoder.config.hidden_size != self.decoder.config.hidden_size: encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states) decoder_outputs = self.decoder( input_ids = decoder_input_ids, attention_mask = decoder_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, inputs_embeds=decoder_inputs_embeds, output_attentions=True, use_cache=True, past_key_values=past_key_values, ) logits = decoder_outputs[0] loss = self.criterion(logits.reshape(-1, self.decoder.config.vocab_size), labels.reshape(-1)) return {'loss': loss, 'logits': logits, 'past_key_values': decoder_outputs.past_key_values, 'decoder_hidden_states': decoder_outputs.hidden_states, 'decoder_attentions': decoder_outputs.attentions, 'cross_attentions': decoder_outputs.cross_attentions, 'encoder_hidden_state': encoder_outputs.hidden_states, 'encoder_attentions': encoder_attention_mask, 'encoder_attentions': encoder_outputs.attentions, } ``` ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22156
TITLE [i18n-it] Translating docs to it COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY <!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 --> ## How-to Guide - [] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx) ## How-to guides - [] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx)
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/22490
TITLE Adding a skip_special_tokens Parameter to .encode() in Transformers COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request I would like to propose adding a skip_special_tokens parameter to the .encode() method in Transformers. Currently, in order to achieve this behavior, I have to either create two different tokenizers or use a workaround such as inserting a character in the middle of a special token and then removing it to simulate the desired behavior. ### Motivation The motivation for this feature request is that in real-world scenarios, users may enter any type of textual data, including special tokens used by the tokenizer. If the tokenizer were to tokenize the user's input as is, it would cause confusion for the whole model and impact the performance of the product. The skip_special_tokens parameter is essential for ensuring the correct processing of user inputs, not just for the `decode()` method but also for the `encode()` and `__call__()` methods. ### Your contribution I have implemented my own tokenizer that inherits from Transformers and simulates this behavior by removing the special tokens from the vocab before encoding. However, I believe this approach **would not be efficient** for scaling up, as it would cause a lot of memory allocations and deallocations. To address this issue, I suggest implementing **two separate dictionaries**, one for special tokens and one for the vocabulary, and incorporating an if-statement to test for the skip_special_tokens parameter. This would make the implementation performant and efficient. Thank you for considering this feature request.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/24841
TITLE Support for caching prompt hidden states through multiple calls of `generate()` COMMENTS 6 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Hi there, I'd like to be able to re-use the hidden states for a common (potentially long) prompt across multiple calls to `model.generate()` in order to reduce redundant computation. Here is how I envision a final API, though I'm sure there are multiple ways to do it. ```python # Load stuff model = AutoModel.from_pretrained('huggyllama/llama-7b') tokenizer = AutoTokenizer.from_pretrained('huggyllama/llama-7b') # Common prompt that we'd prepend to every example prompt = "This is a common prompt in every example." prompt_ids = tokenizer(prompt, return_tensors='pt') # Examples to pass to generate examples = ["Ackbar went to", "Billaba enjoys", "Cody is eating some"] # Generation loop outputs = [] prompt_hidden_state = None for ex in examples: # Current way of doing things out = model.generate( **tokenizer(prompt + ex, return_tensors='pt'), ) # Proposed method to re-use prompt_hidden_state out = model.generate( **tokenizer(x, return_tensors='pt'), common_prompt_ids=prompt_ids, prompt_hidden_state=prompt_hidden_state ) prompt_hidden_state = out.prompt_hidden_state outputs.append(out.sequences) ``` Thanks in advance. ### Motivation A very common pattern for LLM usage is having a common prompt (e.g., instructions and input/output pairs), a sample input, and asking it to generate the sample output. For example: ``` You are a programmer's assistant which converts English descriptions to Python functions. English: <example 1 description> Python: <example 1 function> English: <example 2 description> Python: <example 2 function> English: <example 3 description> Python: <example 3 function> English: <input description> Python: ``` I'd like to be able to cache the common part of the prompt across inputs, that is, everything before `<input description>` which appears in every example to avoid potentially expensive re-computation. ### Your contribution The only existing info I could find is the short discussion [here](https://discuss.huggingface.co/t/avoid-recalculating-hidden-states-between-generate-calls/34209). I tried messing around a bit to get this to work but had little luck. I'm not familiar with the inner-workings of `transformers` and ran into numerous errors. One problem is padding, which if we're using left padding, can cause some misalignment with the prompt hidden states, e.g.: ``` <p> <p> <p> common prompt x_1 x_2 x_3 <p> <p> common prompt x_1 x_2 x_3 x_4 <p> <p> <p> <p> common prompt x_1 x_2 ``` I don't know the best way to solve this. Do we dynamically pad every tensor in `past_key_values`? That seems slow but I don't know if it actually is. If someone can suggest a better/easier way or maybe give some more pointers on how to solve padding. I'd be happy to try again myself. Thanks in advance.
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/22315
TITLE Add MegatronT5 COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 2 rocket: 2 eyes: 0 BODY ### Model description In NeMo Megatron, the T5 model is available, but there is currently no MegatronT5 class in Hugging Face Transformers comparable to MegatronBERT or MegatronGPT2. I have recently finished the porting work and tested the model internally. I would like to share this model with the community. ### Open source status - [X] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation - [NeMo Megatron models](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html) - [NeMo](https://github.com/NVIDIA/NeMo) - [Megatron-LM T5 model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py)
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24063
TITLE Add option for `trust_remote_code=True` on transformers-cli download COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Currently it is very convenient to download models using `transformers-cli download`; however, some models need the extra argument `trust_remote_code=True`, for example `transformers-cli download "tiiuae/falcon-40b"`. ### Motivation Would it make sense to add `transformers-cli download "tiiuae/falcon-40b" --trust_remote_code`? ### Your contribution PR
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/22317
TITLE Add `MegatronT5ForConditionalGeneration` COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR adds the `MegatronT5ForConditionalGeneration` class, which among standard applications can be used for pretrained T5 model from NVIDIA NeMo MegatronT5 :) I also add converting script from NeMo MegatronT5 to Huggingface MegatronT5ForConditionalGeneration model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22315 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @ArthurZucker and @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "PR for Model Addition" ]
https://api.github.com/repos/huggingface/transformers/issues/22885
TITLE KeyError: eval_loss when using Trainer (SpeechT5 fine-tuning) COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info current main branch of Transformers (4.29.0.dev0, 20 Apr 2023) ### Who can help? @hollance ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction We recently published a Colab notebook for fine-tuning SpeechT5 for TTS. https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ This notebook worked fine previously but now it gives an error in `trainer.py` because the `eval_loss` is not part of the metrics. This happens when saving the checkpoint. The progress bar in the notebook shows "No log" for the Validation Loss. I will look into this issue myself first and try to get a smaller reproducible case. My hunch is that something changed in Trainer in between the time I wrote the notebook and now (for example, it now requires Accelerate). ### Expected behavior The notebook should work as before.
[ 27 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/24065
TITLE Add CPMBee model COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Adds the [CPM-Bee](https://github.com/OpenBMB/CPM-Bee/tree/main) pytorch model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
[ 7 ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Model on the Hub" ]
https://api.github.com/repos/huggingface/transformers/issues/24304
TITLE SpikeGPT COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Extract the spiking nature of the LLM and port that set of features over for training/inference. https://github.com/ridgerchu/SpikeGPT ### Motivation The benefit would be more efficient computation (a 22x reduction in cost). ### Your contribution I am willing to test, track down bugs, and push changes. I'm still new to the world of LLM backend coding.
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22821
TITLE set fsdp and bf16 don't save memory COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script? yes - Using distributed or parallel set-up in script? yes ### Who can help? @ArthurZucker @sgu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. download the dataset ``` lang = "Python" import subprocess subprocess.call(["wget", f"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/{lang}.zip"]) subprocess.call(["unzip", f"/content/{lang}.zip"]) !mkdir "log" log_dir = "/content/log" !mkdir "data" data_dir = "/content/data" !mkdir "model" model_dir = "/content/model" !mkdir "tokenizer" tokenizer_dir = "/content/tokenizer" ``` 2. data preprocess ``` import os import json import torch from pathlib import Path from transformers import (Trainer, pipeline, RobertaConfig, TrainingArguments, RobertaForMaskedLM, RobertaTokenizerFast, LineByLineTextDataset, DataCollatorForLanguageModeling) from tokenizers import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing from tokenizers.implementations import ByteLevelBPETokenizer def prepare_text(dir_path): for path in os.listdir(dir_path): os.system(f"gunzip -k {dir_path}/{path}") texts = "" for path in os.listdir(dir_path): if path.endswith(".jsonl"): with open(dir_path + "/" + path, 'r') as f: sample_file = f.readlines() for sample in sample_file: obj = json.loads(sample) texts += obj["original_string"].replace("\n", "").replace("\t", "") + "\n" return texts train1_texts = prepare_text(f"/content/{lang}/final/jsonl/train") train2_texts = prepare_text(f"/content/{lang}/final/jsonl/valid") train_texts = train1_texts + "\n" + train2_texts valid_texts = prepare_text(f"/content/{lang}/final/jsonl/test") for path, text in zip(["train_texts.txt", "valid_texts.txt"], [train_texts, valid_texts]): with open(f"{data_dir}/{path}","w") as f: f.write(text) ``` 3. Train a tokenizer ``` paths = [str(x) for x in Path(f"{data_dir}/").glob("**/*.txt")] tokenizer = ByteLevelBPETokenizer() tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) tokenizer.save_model(tokenizer_dir) tokenizer = ByteLevelBPETokenizer( "tokenizer/vocab.json", "tokenizer/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) ``` 4. 
Build model ``` config = RobertaConfig( vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_dir, max_len=512) model = RobertaForMaskedLM(config=config) model.num_parameters() train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=f"{data_dir}/train_texts.txt", block_size=128, ) test_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=f"{data_dir}/valid_texts.txt", block_size=128, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) training_args = TrainingArguments( output_dir=model_dir, overwrite_output_dir=True, num_train_epochs=4, per_gpu_train_batch_size=64, save_steps=5000, do_eval=True, logging_dir=log_dir, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset = test_dataset ) trainer.train() trainer.save_model(model_dir) tokenizer.save_pretrained(tokenizer_dir) ``` ### Expected behavior before set fsdp and bf16: ``` training_args = TrainingArguments( output_dir=model_dir, overwrite_output_dir=True, num_train_epochs=4, per_gpu_train_batch_size=64, save_steps=5000, do_eval=True, logging_dir=log_dir, ) ``` <img width="417" alt="Snipaste_2023-04-18_15-42-22" src="https://user-images.githubusercontent.com/41561936/232707188-2579965b-92fd-4ba6-87de-b82ca948ec54.png"> after set fsdp and bf16: ``` training_args = TrainingArguments( output_dir=model_dir, overwrite_output_dir=True, num_train_epochs=4, per_gpu_train_batch_size=64, save_steps=5000, do_eval=True, logging_dir=log_dir, fsdp=True, bf16=True, ) ``` <img width="415" alt="Snipaste_2023-04-18_15-42-45" src="https://user-images.githubusercontent.com/41561936/232707483-2b89c658-172d-4a23-a7fc-fe40cd1dfe83.png"> The memory usage is not much different and does not achieve the desired effect. Why? I also try to set `per_gpu_train_batch_size=4` when `fsdp=True, bf16=True`: <img width="426" alt="Snipaste_2023-04-18_15-49-23" src="https://user-images.githubusercontent.com/41561936/232708818-efa676d9-4e6b-440a-b0e0-e66e54026da5.png"> Compared with the results of the previous set of experiments, the increase of memory usage is much greater than the increase of batch size. Why?
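To make the memory comparison above more precise than nvidia-smi screenshots, peak allocation per device can be logged around the training call; a small sketch using standard `torch.cuda` counters (the helper name is mine, not from the issue):

```python
import torch

def log_peak_memory(tag: str):
    # Peak memory allocated on the current CUDA device since the last reset,
    # reported in GiB.
    peak_gib = torch.cuda.max_memory_allocated() / 1024 ** 3
    print(f"[{tag}] peak allocated: {peak_gib:.2f} GiB")

torch.cuda.reset_peak_memory_stats()
# trainer.train()   # run training exactly as in the script above
log_peak_memory("fsdp + bf16, per_gpu_train_batch_size=64")
```

One plausible (unconfirmed) factor: this RoBERTa config is only on the order of 80M parameters, so at batch size 64 activation memory dominates, and sharding parameters/gradients/optimizer state with FSDP saves comparatively little; with mixed-precision training the master weights typically stay in fp32 as well, so parameter memory does not shrink either.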
[ 24 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ "trainer" ]
https://api.github.com/repos/huggingface/transformers/issues/24031
TITLE Add scGPT Model COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 1 rocket: 0 eyes: 0 BODY ### Model description scGPT is a single-cell foundation model based on the GPT architecture. The model is shown to have captured meaningful biological insights into cells and genes. The authors state the model can be fine-tuned for downstream tasks including cell-type annotation, genetic perturbation, etc. I'd like to add scGPT to HuggingFace Transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The paper [scGPT: Towards Building a Foundation Model for Single-Cell Multi-omics Using Generative AI](https://www.biorxiv.org/content/10.1101/2023.04.30.538439v1.full.pdf) by [Haotian Cui](https://www.researchgate.net/scientific-contributions/Haotian-Cui-2193100667), [Chloe Wang](https://www.linkedin.com/in/chloe-xueqi-wang-979712158/?originalSubdomain=ca), [Hassaan Maan](https://hsmaan.com/), [Bo Wang](https://bowang87.github.io/) Github link: [scGPT by subercui](https://github.com/bowang-lab/scGPT) Model Checkpoint: [Google Drive](https://drive.google.com/drive/folders/1kkug5C7NjvXIwQGGaGoqXTk_Lb_pDrBU) - From this checkpoint I can generate the model weights
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22328
TITLE PyTorch/XLA FSDP doesn't seem to work on TPU-v3-8 VM COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info GCP TPU-v3-8 VM Operating System: Ubuntu 20.04.4 LTS Kernel: Linux 5.13.0-1027-gcp transformers 4.28.0.dev0 (pip install git+https://github.com/huggingface/transformers.git on 03/22/2023) torch 2.0.0 torch-xla 2.0 ### Who can help? People from #21406 that is @AlexWertheim, possibly @pacman100 and @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The [glue example with Trainer for TPUs](https://github.com/huggingface/transformers/tree/main/examples/pytorch#running-on-tpus) without FSTP worked flawlessly in my TPU-v3-8 VM with xlm-roberta-base (because the model and batch fit properly within each core). Now that FSTP was integrated thanks to @AlexWertheim, I tried running facebook/xlm-roberta-xl on this example with the additional parameters. ```bash python xla_spawn.py --num_cores 8 \ run_glue.py \ --model_name_or_path facebook/xlm-roberta-xl \ --task_name mnli \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 4 \ --learning_rate 2e-5 \ --num_train_epochs 10.0 \ --output_dir mnli_output \ --report_to all \ --fsdp 'shard_grad_op' \ --fsdp_config '../fstp_config.json' \ --debug 'tpu_metrics_debug' \ --logging_steps 100 \ --gradient_accumulation_steps 8 ``` fstp_config.json: ```json { "fsdp_min_num_params": 10000000, "xla": true, "xla_fsdp_settings": {} } ``` I also tried using `"fsdp_transformer_layer_cls_to_wrap": ["XLMRobertaXLModel","XLMRobertaXLClassificationHead"]` instead of `"fsdp_min_num_params": 10000000`. Also `full_shard` instead of `shard_grad_op` and some other variations but they're all giving me the following error: ```bash 0%| | 1/3068000 [08:09<416756:07:35, 489.02s/it]2023-03-23 02:02:19.905715: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
2023-03-23 02:02:22.081681: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] StackTrace: 2023-03-23 02:02:22.081762: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** Begin stack trace *** 2023-03-23 02:02:22.081770: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] tsl::CurrentStackTrace() 2023-03-23 02:02:22.081777: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::util::ReportComputationError(tsl::Status const&, absl::lts_20220623::Span<xla::XlaComputation const* const>, absl::lts_20220623::Span<xla::Shape const* const>) 2023-03-23 02:02:22.081783: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::XrtComputationClient::ExecuteComputation(xla::ComputationClient::Computation const&, absl::lts_20220623::Span<std::shared_ptr<xla::ComputationClient::Data> const>, std::string const&, xla::ComputationClient::ExecuteComputationOptions const&) 2023-03-23 02:02:22.081790: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch_xla::XlaBackendImpl::ExecuteComputation(std::shared_ptr<torch::lazy::Computation>, c10::ArrayRef<std::shared_ptr<torch::lazy::BackendData> >, torch::lazy::BackendDevice const&) const 2023-03-23 02:02:22.081809: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081818: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch::lazy::MultiWait::Complete(std::function<void ()> const&) 2023-03-23 02:02:22.081825: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081831: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081836: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081842: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] clone 2023-03-23 02:02:22.081847: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** End stack trace *** 2023-03-23 02:02:22.081854: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081862: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Status: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2023-03-23 02:02:22.081870: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2 root error(s) found. 2023-03-23 02:02:22.081878: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:22.081891: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:22.081898: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:22.081905: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081911: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[XRTExecute_G10]] 2023-03-23 02:02:22.081920: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:22.081928: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081937: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. 
There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:22.081944: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:22.081951: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:22.081959: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081967: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 successful operations. 2023-03-23 02:02:22.081975: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 derived errors ignored. 2023-03-23 02:02:22.081983: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Recent warning and error logs: 2023-03-23 02:02:22.081989: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. /home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( Exception in device=TPU:1: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G10]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
Traceback (most recent call last): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn fn(gindex, *args) File "/datadrive/test/run_glue.py", line 622, in _mp_fn main() File "/datadrive/test/run_glue.py", line 534, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1644, in train return inner_training_loop( File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 30, in __next__ return self.next() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 42, in next xm.mark_step() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/core/xla_model.py", line 949, in mark_step torch_xla._XLAC._xla_step_marker( RuntimeError: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G10]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:23.050198: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
https://symbolize.stripped_domain/r/?trace=7f7627be9376,7f7627bee41f,0&map= *** SIGTERM received by PID 89268 (TID 89268) on cpu 51 from PID 89123; stack trace: *** PC: @ 0x7f7627be9376 (unknown) pthread_cond_wait@@GLIBC_2.3.2 @ 0x7f74d8c2aa1a 1152 (unknown) @ 0x7f7627bee420 (unknown) (unknown) @ 0x1 (unknown) (unknown) https://symbolize.stripped_domain/r/?trace=7f7627be9376,7f74d8c2aa19,7f7627bee41f,0&map=ceee8fa20ddf9c34af43f587221e91de:7f74cbd02000-7f74d8e41840 E0323 02:02:23.479201 89268 coredump_hook.cc:360] RAW: Remote crash gathering disabled for SIGTERM. E0323 02:02:24.172933 89268 process_state.cc:784] RAW: Raising signal 15 with default behavior 2023-03-23 02:02:25.056856: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] StackTrace: 2023-03-23 02:02:25.056942: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** Begin stack trace *** 2023-03-23 02:02:25.056952: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] tsl::CurrentStackTrace() 2023-03-23 02:02:25.056959: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::util::ReportComputationError(tsl::Status const&, absl::lts_20220623::Span<xla::XlaComputation const* const>, absl::lts_20220623::Span<xla::Shape const* const>) 2023-03-23 02:02:25.056967: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::XrtComputationClient::ExecuteComputation(xla::ComputationClient::Computation const&, absl::lts_20220623::Span<std::shared_ptr<xla::ComputationClient::Data> const>, std::string const&, xla::ComputationClient::ExecuteComputationOptions const&) 2023-03-23 02:02:25.056976: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch_xla::XlaBackendImpl::ExecuteComputation(std::shared_ptr<torch::lazy::Computation>, c10::ArrayRef<std::shared_ptr<torch::lazy::BackendData> >, torch::lazy::BackendDevice const&) const 2023-03-23 02:02:25.056984: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.056997: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch::lazy::MultiWait::Complete(std::function<void ()> const&) 2023-03-23 02:02:25.057005: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057011: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057018: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057025: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] clone 2023-03-23 02:02:25.057033: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** End stack trace *** 2023-03-23 02:02:25.057041: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057050: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Status: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2023-03-23 02:02:25.057058: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2 root error(s) found. 2023-03-23 02:02:25.057067: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:25.057075: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:25.057085: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 
2023-03-23 02:02:25.057094: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057102: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[XRTExecute_G12]] 2023-03-23 02:02:25.057111: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:25.057135: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057143: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:25.057151: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:25.057160: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:25.057168: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057176: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 successful operations. 2023-03-23 02:02:25.057186: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 derived errors ignored. 2023-03-23 02:02:25.057194: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Recent warning and error logs: 2023-03-23 02:02:25.057202: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:25.057209: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. /home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( Exception in device=TPU:6: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G12]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 
This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. Traceback (most recent call last): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn fn(gindex, *args) File "/datadrive/test/run_glue.py", line 622, in _mp_fn main() File "/datadrive/test/run_glue.py", line 534, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1644, in train return inner_training_loop( File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 30, in __next__ return self.next() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 42, in next xm.mark_step() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/core/xla_model.py", line 949, in mark_step torch_xla._XLAC._xla_step_marker( RuntimeError: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G12]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
2023-03-23 02:02:29.834867: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.834650343","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835007: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.834795697","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835038: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834893793","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835095: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834956775","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835197: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835008010","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835206: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834976683","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835408: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835235487","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835456: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834964014","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 
02:02:29.835480: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835338354","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835540: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834899794","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835614: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834992684","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835687: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.835345000","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835752: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.835176851","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC Traceback (most recent call last): File "xla_spawn.py", line 83, in <module> main() File "xla_spawn.py", line 79, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 397, in spawn result = torch.multiprocessing.start_processes( File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes while not context.join(): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 149, in join raise ProcessExitedException( torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with exit code 17 /home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' ``` ### Expected behavior From my understanding, the model was supposed to be split loaded onto the TPU cores, along with whatever `full_shard` entails, but it doesn't seem to be happening.
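To isolate whether parameter sharding is actually happening, one option is to wrap the model with torch_xla's FSDP class directly, outside of Trainer; a sketch assuming the `torch_xla.distributed.fsdp` module that ships with torch-xla 2.0 (auto-wrap policies and the training loop are omitted):

```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as XlaFSDP
from transformers import AutoModelForSequenceClassification

def _mp_fn(index):
    device = xm.xla_device()
    model = AutoModelForSequenceClassification.from_pretrained(
        "facebook/xlm-roberta-xl", num_labels=3
    )
    # Full sharding: each of the 8 cores should hold only a slice of the
    # ~3.5B parameters instead of a complete replica.
    model = XlaFSDP(model.to(device))
    print(index, "wrapped:", type(model).__name__)

if __name__ == "__main__":
    xmp.spawn(_mp_fn, args=(), nprocs=8)
```

One plausible contributor to the OOM, not confirmed in the thread: with `shard_grad_op` every core still keeps a full fp32 parameter copy (~14 GB for 3.5B parameters), which already approaches the ~16 GB of HBM per v3 core before activations are counted, so only a fully sharded (and likely layer-wise wrapped) setup would have a chance of fitting.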
[ 18 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "PyTorch FSDP" ]
https://api.github.com/repos/huggingface/transformers/issues/24936
TITLE Add support for Llama-2-70b-chat-hf in transformers COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description Not sure if it is a bug or whether it is intentionally not supported yet. In either case: there have been no confirmed reports of anyone being able to successfully run the official **Llama-2-70b-chat-hf** model in transformers. ### Open source status - [ ] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Official model weights: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf Related open bug: https://github.com/facebookresearch/llama/issues/423
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22257
TITLE Ernie-M for pretraining multilingual models COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Two things that might help in that regard: - To train TSDAE, one needs support in the form of an ErnieMForPreTraining class, just as for Ernie https://huggingface.co/docs/transformers/model_doc/ernie#transformers.ErnieForPreTraining - To train cross-encoders with contrastive loss, a bit like SimCSE, one needs standard support for getting the 'attention_mask' out of the tokenizer sbert uses (see the sketch below). Sbert just expects it. I tried to hack it into sbert, but failed. ### Motivation I suspect that pretraining multilingual sentence embeddings on top of Ernie-M-large will yield close to SOTA results. According to mSimCSE, we can get top multilingual embeddings just by training on their 300k dataset of English pairs alone (this worked better than cross-lingual training). With a stronger base model (they used xlm-roberta), SOTA embeddings might be within easy reach. https://github.com/yaushian/mSimCSE ### Your contribution I can't do it alone; please help.
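A minimal sketch of forcing the tokenizer to emit the mask that sentence-transformers expects — the checkpoint id is a placeholder, not a verified hub name:

```python
from transformers import AutoTokenizer

# Placeholder checkpoint id - substitute the actual ERNIE-M checkpoint in use.
tokenizer = AutoTokenizer.from_pretrained("path/to/ernie-m-large")

batch = tokenizer(
    ["first sentence", "second sentence"],
    padding=True,
    truncation=True,
    return_attention_mask=True,  # request the mask even if the default omits it
    return_tensors="pt",
)
# sentence-transformers pools over tokens using exactly these two tensors.
print(batch["input_ids"].shape, batch["attention_mask"].shape)
```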
[ 20, 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model", "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/25040
TITLE Add ViTMatte model COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description ViTMatte is a recently released model for alpha matting on images, i.e. background removal. The model accepts an input image and a trimap (a manually labelled grayscale image outlining the rough border of the foreground object) and predicts the alpha matte for each pixel. It introduces a series of small adaptations to the ViT architecture - selective global attention plus window attention, and convolutional blocks between transformer blocks - to reduce computational complexity and enhance the high-frequency information passed through the network. At the time of publishing, ViTMatte showed SOTA performance on Distinctions-646 and strong performance (> MatteFormer) on Composition-1K. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Github: https://github.com/hustvl/ViTMatte Paper: https://arxiv.org/pdf/2305.15272.pdf Demo: https://colab.research.google.com/drive/1Dc2qoJueNZQyrTU19sIcrPyRDmvuMTF3?usp=sharing
[ 20, 14 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model", "Vision" ]
https://api.github.com/repos/huggingface/transformers/issues/24675
TITLE Bump grpcio from 1.44.0 to 1.53.0 in /examples/research_projects/decision_transformer COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [grpcio](https://github.com/grpc/grpc) from 1.44.0 to 1.53.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/grpc/grpc/releases">grpcio's releases</a>.</em></p> <blockquote> <h2>Release v1.53.0</h2> <p>This is release 1.53.0 (<a href="https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md">glockenspiel</a>) of gRPC Core.</p> <p>For gRPC documentation, see <a href="https://grpc.io/">grpc.io</a>. For previous releases, see <a href="https://github.com/grpc/grpc/releases">Releases</a>.</p> <p>This release contains refinements, improvements, and bug fixes, with highlights listed below.</p> <h2>Core</h2> <ul> <li>xDS: fix crash when removing the last endpoint from the last locality in weighted_target. (<a href="https://redirect.github.com/grpc/grpc/pull/32592">#32592</a>)</li> <li>filter stack: pass peer name up via recv_initial_metadata batch. (<a href="https://redirect.github.com/grpc/grpc/pull/31933">#31933</a>)</li> <li>[EventEngine] Add advice against blocking work in callbacks. (<a href="https://redirect.github.com/grpc/grpc/pull/32397">#32397</a>)</li> <li>[http2] Dont drop connections on metadata limit exceeded. (<a href="https://redirect.github.com/grpc/grpc/pull/32309">#32309</a>)</li> <li>xDS: reject aggregate cluster with empty cluster list. (<a href="https://redirect.github.com/grpc/grpc/pull/32238">#32238</a>)</li> <li>Fix Python epoll1 Fork Support. (<a href="https://redirect.github.com/grpc/grpc/pull/32196">#32196</a>)</li> <li>server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter. (<a href="https://redirect.github.com/grpc/grpc/pull/32106">#32106</a>)</li> <li>[EventEngine] Add invalid handle types to the public API. (<a href="https://redirect.github.com/grpc/grpc/pull/32202">#32202</a>)</li> <li>[EventEngine] Refactoring the EventEngine Test Suite: Part 1. (<a href="https://redirect.github.com/grpc/grpc/pull/32127">#32127</a>)</li> <li>xDS: fix WeightedClusters total weight handling. (<a href="https://redirect.github.com/grpc/grpc/pull/32134">#32134</a>)</li> </ul> <h2>C++</h2> <ul> <li>Update minimum MSVC version to 2019. (<a href="https://redirect.github.com/grpc/grpc/pull/32615">#32615</a>)</li> <li>Use CMake variables for paths in pkg-config files. (<a href="https://redirect.github.com/grpc/grpc/pull/31671">#31671</a>)</li> </ul> <h2>C#</h2> <ul> <li>Grpc.Tools: Use x86 protoc binaries on arm64 Windows. (<a href="https://redirect.github.com/grpc/grpc/pull/32017">#32017</a>)</li> </ul> <h2>Python</h2> <ul> <li>Support python 3.11 on aarch64. (<a href="https://redirect.github.com/grpc/grpc/pull/32270">#32270</a>)</li> <li>Include .pyi file. (<a href="https://redirect.github.com/grpc/grpc/pull/32268">#32268</a>)</li> <li>De-experimentalize wait-for-ready. (<a href="https://redirect.github.com/grpc/grpc/pull/32143">#32143</a>)</li> <li>De-experimentalize compression. (<a href="https://redirect.github.com/grpc/grpc/pull/32138">#32138</a>)</li> </ul> <h2>Ruby</h2> <ul> <li>[ruby]: add pre-compiled binaries for ruby 3.2; drop them for ruby 2.6. (<a href="https://redirect.github.com/grpc/grpc/pull/32089">#32089</a>)</li> </ul> <h2>Release v1.53.0-pre2</h2> <p>This is a prerelease of gRPC Core 1.53.0 (glockenspiel).</p> <p>For gRPC documentation, see <a href="https://grpc.io/">grpc.io</a>. 
For previous releases, see <a href="https://github.com/grpc/grpc/releases">Releases</a>.</p> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md">grpcio's changelog</a>.</em></p> <blockquote> <h1>gRPC Release Schedule</h1> <p>Below is the release schedule for gRPC <a href="https://github.com/grpc/grpc-java/releases">Java</a>, <a href="https://github.com/grpc/grpc-go/releases">Go</a> and <a href="https://github.com/grpc/grpc/releases">Core</a> and its dependent languages C++, C#, Objective-C, PHP, Python and Ruby.</p> <p>Releases are scheduled every six weeks on Tuesdays on a best effort basis. In some unavoidable situations a release may be delayed or released early or a language may skip a release altogether and do the next release to catch up with other languages. See the past releases in the links above. A six-week cycle gives us a good balance between delivering new features/fixes quickly and keeping the release overhead low.</p> <p>The gRPC release support policy can be found <a href="https://grpc.io/docs/what-is-grpc/faq/#how-long-are-grpc-releases-supported-for">here</a>.</p> <p>Releases are cut from release branches. For Core and Java repos, the release branch is cut two weeks before the scheduled release date. For Go, the branch is cut just before the release. An RC (release candidate) is published for Core and its dependent languages just after the branch cut. This RC is later promoted to release version if no further changes are made to the release branch. We do our best to keep head of master branch stable at all times regardless of release schedule. Daily build packages from master branch for C#, PHP, Python, Ruby and Protoc plugins are published on <a href="https://packages.grpc.io/">packages.grpc.io</a>. 
If you depend on gRPC in production we recommend to set up your CI system to test the RCs and, if possible, the daily builds.</p> <p>Names of gRPC releases are <a href="https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md">here</a>.</p> <table> <thead> <tr> <th>Release</th> <th>Scheduled Branch Cut</th> <th>Scheduled Release Date</th> </tr> </thead> <tbody> <tr> <td>v1.17.0</td> <td>Nov 19, 2018</td> <td>Dec 4, 2018</td> </tr> <tr> <td>v1.18.0</td> <td>Jan 2, 2019</td> <td>Jan 15, 2019</td> </tr> <tr> <td>v1.19.0</td> <td>Feb 12, 2019</td> <td>Feb 26, 2019</td> </tr> <tr> <td>v1.20.0</td> <td>Mar 26, 2019</td> <td>Apr 9, 2019</td> </tr> <tr> <td>v1.21.0</td> <td>May 7, 2019</td> <td>May 21, 2019</td> </tr> <tr> <td>v1.22.0</td> <td>Jun 18, 2019</td> <td>Jul 2, 2019</td> </tr> <tr> <td>v1.23.0</td> <td>Jul 30, 2019</td> <td>Aug 13, 2019</td> </tr> <tr> <td>v1.24.0</td> <td>Sept 10, 2019</td> <td>Sept 24, 2019</td> </tr> <tr> <td>v1.25.0</td> <td>Oct 22, 2019</td> <td>Nov 5, 2019</td> </tr> <tr> <td>v1.26.0</td> <td>Dec 3, 2019</td> <td>Dec 17, 2019</td> </tr> <tr> <td>v1.27.0</td> <td>Jan 14, 2020</td> <td>Jan 28, 2020</td> </tr> <tr> <td>v1.28.0</td> <td>Feb 25, 2020</td> <td>Mar 10, 2020</td> </tr> <tr> <td>v1.29.0</td> <td>Apr 7, 2020</td> <td>Apr 21, 2020</td> </tr> <tr> <td>v1.30.0</td> <td>May 19, 2020</td> <td>Jun 2, 2020</td> </tr> <tr> <td>v1.31.0</td> <td>Jul 14, 2020</td> <td>Jul 28, 2020</td> </tr> <tr> <td>v1.32.0</td> <td>Aug 25, 2020</td> <td>Sep 8, 2020</td> </tr> <tr> <td>v1.33.0</td> <td>Oct 6, 2020</td> <td>Oct 20, 2020</td> </tr> <tr> <td>v1.34.0</td> <td>Nov 17, 2020</td> <td>Dec 1, 2020</td> </tr> <tr> <td>v1.35.0</td> <td>Dec 29, 2020</td> <td>Jan 12, 2021</td> </tr> <tr> <td>v1.36.0</td> <td>Feb 9, 2021</td> <td>Feb 23, 2021</td> </tr> <tr> <td>v1.37.0</td> <td>Mar 23, 2021</td> <td>Apr 6, 2021</td> </tr> <tr> <td>v1.38.0</td> <td>May 4, 2021</td> <td>May 18, 2021</td> </tr> <tr> <td>v1.39.0</td> <td>Jun 15, 2021</td> <td>Jun 29, 2021</td> </tr> <tr> <td>v1.40.0</td> <td>Jul 27, 2021</td> <td>Aug 10, 2021</td> </tr> <tr> <td>v1.41.0</td> <td>Sep 7, 2021</td> <td>Sep 21, 2021</td> </tr> <tr> <td>v1.42.0</td> <td>Oct 19, 2021</td> <td>Nov 2, 2021</td> </tr> <tr> <td>v1.43.0</td> <td>Nov 30, 2021</td> <td>Dec 14, 2021</td> </tr> </tbody> </table> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/grpc/grpc/commit/358bfb581feeda5bf17dd3b96da1074d84a6ef8d"><code>358bfb5</code></a> Bump version to 1.53.0 (<a href="https://redirect.github.com/grpc/grpc/issues/32685">#32685</a>)</li> <li><a href="https://github.com/grpc/grpc/commit/6e1ebe76d87a2e9b643c08b3e234d374edcd9e92"><code>6e1ebe7</code></a> Backport: Ensure compatibility with the new custom kokoro win2019 image (<a href="https://redirect.github.com/grpc/grpc/issues/326">#326</a>...</li> <li><a href="https://github.com/grpc/grpc/commit/44a77f6e911b95e1bc2c909b348123b2da2c4375"><code>44a77f6</code></a> Backport 1.53: Update minimum MSVC version to 2019 (<a href="https://redirect.github.com/grpc/grpc/issues/32615">#32615</a>)</li> <li><a href="https://github.com/grpc/grpc/commit/c11153cb4ef01ca5f83304b2e28edd0182b3c0d0"><code>c11153c</code></a> backport to 1.53: xDS: fix crash when removing the last endpoint from the las...</li> <li><a href="https://github.com/grpc/grpc/commit/7c7712a6b08ebf1bdc18fc43dc871b47b3dffe97"><code>7c7712a</code></a> Bump version to 1.53.0-pre2. 
(<a href="https://redirect.github.com/grpc/grpc/issues/32545">#32545</a>)</li> <li><a href="https://github.com/grpc/grpc/commit/a4017dc45e342064722a36181ed14e6d7b469d29"><code>a4017dc</code></a> backport to 1.53: [promises] Make Poll&lt;T&gt; its own type, not a variant&lt;&gt; (<a href="https://redirect.github.com/grpc/grpc/issues/32540">#32540</a>)</li> <li><a href="https://github.com/grpc/grpc/commit/3f93c1667280e6f11a1eb35cccfb8c81c698bee5"><code>3f93c16</code></a> Fuzzer fix backport to v1.53 (<a href="https://redirect.github.com/grpc/grpc/issues/32511">#32511</a>)</li> <li><a href="https://github.com/grpc/grpc/commit/5b244b25c2b87a85781ceeecd34ce0f8e8e7e840"><code>5b244b2</code></a> Bump release version to 1.53.0-pre1 (<a href="https://redirect.github.com/grpc/grpc/issues/32428">#32428</a>)</li> <li><a href="https://github.com/grpc/grpc/commit/6589340efc39b87c94897d221eaf949213cdac87"><code>6589340</code></a> Bump core version 202302161703 (<a href="https://redirect.github.com/grpc/grpc/issues/32416">#32416</a>)</li> <li><a href="https://github.com/grpc/grpc/commit/d49e1513063e6624e08eb6f59049596178a28783"><code>d49e151</code></a> [backoff] Add random early detection classifier (<a href="https://redirect.github.com/grpc/grpc/issues/32354">#32354</a>)</li> <li>Additional commits viewable in <a href="https://github.com/grpc/grpc/compare/v1.44.0...v1.53.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=grpcio&package-manager=pip&previous-version=1.44.0&new-version=1.53.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/24201
TITLE Finish dataloader integration COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Follow up to https://github.com/huggingface/transformers/pull/24028, which removes the TPU-specific dataloader bits. The `MpDeviceLoader` does already what `Trainer` was doing before, it's just wrapped: ```python class MpDeviceLoader(object): """Wraps an existing PyTorch DataLoader with background data upload. This class should only be using with multi-processing data parallelism. Args: loader (:class:`torch.utils.data.DataLoader`): The PyTorch DataLoader to be wrapped. device (`torch.device`...): The device where the data has to be sent. kwargs: Named arguments for the `ParallelLoader` constructor. """ def __init__(self, loader, device, **kwargs): self._loader = loader self._device = device self._parallel_loader_kwargs = kwargs def __iter__(self): parallel_loader = ParallelLoader(self._loader, [self._device], **self._parallel_loader_kwargs) return parallel_loader.per_device_loader(self._device) def __len__(self): return len(self._loader) ``` So the native Accelerate integration will work just fine Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
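For reference, a minimal usage sketch of the wrapper quoted above — standard torch_xla APIs, not code from this PR:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import torch_xla.core.xla_model as xm
from torch_xla.distributed.parallel_loader import MpDeviceLoader

dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
cpu_loader = DataLoader(dataset, batch_size=32, shuffle=True)

device = xm.xla_device()
# Background upload of each batch to the TPU core owned by this process -
# the same thing Trainer previously did by constructing a ParallelLoader
# and calling per_device_loader by hand.
train_loader = MpDeviceLoader(cpu_loader, device)

for features, labels in train_loader:
    assert features.device.type == "xla"
    break
```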
[ 24 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ "trainer" ]
https://api.github.com/repos/huggingface/transformers/issues/24038
TITLE Add VGCN-BERT model COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description Hi, I am the author of the [VGCN-BERT paper](https://arxiv.org/abs/2004.05707); the original implementation is in my GitHub [vgcn-bert repo](https://github.com/Louis-udm/VGCN-BERT), but recently I updated the algorithm and implemented a new version for integration into Transformers. > Much progress has been made recently on text classification with methods based on neural networks. In particular, models using attention mechanism such as BERT have shown to have the capability of capturing the contextual information within a sentence or document. However, their ability of capturing the global information about the vocabulary of a language is more limited. This latter is the strength of Graph Convolutional Networks (GCN). In this paper, we propose VGCN-BERT model which combines the capability of BERT with a Vocabulary Graph Convolutional Network (VGCN). Local information and global information interact through different layers of BERT, allowing them to influence mutually and to build together a final representation for classification. In our experiments on several text classification datasets, our approach outperforms BERT and GCN alone, and achieve higher effectiveness than that reported in previous studies. I have actually finished the integration of my new version and opened the PR. This new VGCN-BERT algorithm has the following improvements: - Greatly speeds up the computation of the embedding vocabulary graph convolutional network (or Word Graph embedding). Taking CoLA as an example, the new model only increases training time by 11% compared with the base model. - Updated subgraph selection algorithm. - Currently uses DistilBERT as the base model, but it is easy to migrate to other models. - Provides two graph construction methods in vgcn_bert/modeling_graph.py (the same NPMI statistical method as the paper, and a predefined entity-relationship mapping method); see the sketch below for the NPMI weighting. I hope that after integration into Transformers, someone can discover more practical use cases. I am ashamed to say that I have not discovered many real use cases myself, mainly because the word-grounded graph obtained through statistical methods brings only limited improvement to the LLM. I think its potential application is when there are specific/customized graphs that need to be integrated into an LLM. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://arxiv.org/abs/2004.05707 https://github.com/Louis-udm/VGCN-BERT
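The NPMI weighting mentioned in the bullet list can be sketched as follows; this is the standard sliding-window NPMI definition, with the window size and threshold as illustrative values rather than the ones used in the repo:

```python
import math
from collections import Counter
from itertools import combinations

def npmi_word_graph(docs, window=20, threshold=0.2):
    """Edges of a vocabulary graph weighted by normalized PMI, computed over
    sliding windows of tokenized documents. Returns {(w1, w2): npmi}."""
    word_cnt, pair_cnt, n_windows = Counter(), Counter(), 0
    for tokens in docs:
        for start in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[start:start + window])
            n_windows += 1
            word_cnt.update(win)
            pair_cnt.update(combinations(sorted(win), 2))
    edges = {}
    for (w1, w2), c_ij in pair_cnt.items():
        p_ij = c_ij / n_windows
        p_i, p_j = word_cnt[w1] / n_windows, word_cnt[w2] / n_windows
        if p_ij >= 1.0:              # co-occur in every window
            edges[(w1, w2)] = 1.0
            continue
        npmi = math.log(p_ij / (p_i * p_j)) / (-math.log(p_ij))
        if npmi > threshold:
            edges[(w1, w2)] = npmi
    return edges

docs = [["graph", "meets", "bert"], ["bert", "likes", "graph", "structure"]]
print(npmi_word_graph(docs, window=3, threshold=0.0))
```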
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24671
TITLE Is there any plan to add kosmos-2 to the transformers. COMMENTS 10 REACTIONS +1: 2 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description Kosmos-2 is a grounded multimodal large language model, which adds grounding and referring capabilities compared with Kosmos-1. The model can accept image regions selected by the user via bounding boxes as input, provide visual answers (i.e., bounding boxes), and ground its text output to the visual world. **Is there any plan to add this model to transformers?** ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code: https://github.com/microsoft/unilm/tree/master/kosmos-2 Paper: https://arxiv.org/abs/2306.14824 Weights: the checkpoint can be downloaded from [here](https://conversationhub.blob.core.windows.net/beit-share-public/kosmos-2/kosmos-2.pt?sv=2021-10-04&st=2023-06-08T11%3A16%3A02Z&se=2033-06-09T11%3A16%3A00Z&sr=c&sp=r&sig=N4pfCVmSeq4L4tS8QbrFVsX6f6q844eft8xSuXdxU48%3D)
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24722
TITLE Feature Request: To add nested hierarchy retrieval from Donut response COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request ### Donut for hierarchy extraction (Document Parsing) While preprocessing the ground truth json to the tokens for Donut the processor function (json2token) handles nested hierarchy but the same doesn't hold true for token2json. Below is an example json: ` { "header": "This is 1st header", "elements": [ { "text_block": "This is a textblock" }, { "header": "1st nested header", "elements": [ { "text_block": "This is a sentence" }, { "text_block": "Another sentence...." }, { "itallic_header": "This is an itallic header", "elements": [ { "text_block": "Text 1 inside itallic header.." }, { "text_block": "Text 2 inside itallic header.." } ] } ] } ] } ` Consider the above json. Applying the json2token function gives the following token sequence. Function Call: `output = json2token(temp_test)` > <s_header>This is 1st header</s_header><s_elements><s_text_block>This is a textblock</s_text_block><sep/><s_header>1st nested header</s_header><s_elements><s_text_block>This is a sentence</s_text_block><sep/><s_text_block>Another sentence....</s_text_block><sep/><s_itallic_header>This is an itallic header</s_itallic_header><s_elements><s_text_block>Text 1 inside itallic header..</s_text_block><sep/><s_text_block>Text 2 inside itallic header..</s_text_block></s_elements></s_elements></s_elements> This maintains the hierarchy (like parenthesis matching). So, if donut is trained on such data it will give response which parses the information & also retains the hierarchy but the token2json function doesn't handle the conversion properly. Below is the output of the function id passed the token sequence present above. Function Call: `processor.token2json(output)` Output ` [ { 'header': 'This is 1st header', 'elements': [ { 'text_block': 'This is a textblock' }, { 'header': '1st nested header', 'text_block': 'This is a sentence' }, { 'text_block': 'Another sentence....' }, { 'itallic_header': 'This is an itallic header', 'text_block': 'Text 1 inside itallic header..' }, { 'text_block': 'Text 2 inside itallic header..' } ] } ] ` Updated Function Results (Preserving the hierarchy): ` [ { 'header': 'This is 1st header', 'elements': [ { 'text_block': 'This is a textblock' }, { 'header': '1st nested header', 'elements': [ { 'text_block': 'This is a sentence' }, { 'text_block': 'Another sentence....' }, { 'itallic_header': 'This is an itallic header', 'elements': [ { 'text_block': 'Text 1 inside itallic header..' }, { 'text_block': 'Text 2 inside itallic header..' } ] } ] } ] } ] ` Example from CORD: > temp_test = { "company": "ADVANCO COMPANY", "date": "17/01/2018", "address": "NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR", "total": "7.00" } Updated Function Output: ` [ { 'company': 'ADVANCO COMPANY', 'date': '17/01/2018', 'address': 'NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR', 'total': '7.00' } ] ` ### Motivation Found out about this while working on a project to extract information from images also maintaining the hierarchy/structure of it. Going through the CORD dataset made me realize that the data itself is not nested in nature. So, thought of testing on a sample the postprocessing logics json -> token & token -> json conversion. 
Updated the token2json to get the hierarchy as it is from the token but wasn't sure about the model performance on nested jsons but long story short Donut predicts the hierarchy pretty good. ### Your contribution ` def token2json(tokens, is_inner_value=False, nested_key = 'elements'): """ Convert a (generated) token seuqnce into an ordered JSON format """ output = dict() while tokens: start_token = re.search(r"<s_(.*?)>", tokens, re.IGNORECASE) if start_token is None: break key = start_token.group(1) start_matches = re.finditer(fr"<s_{key}>", tokens) end_matches = re.finditer(fr"</s_{key}>", tokens) start_tups = [(match.group(), match.start(), match.end()) for match in start_matches] end_tups = [(match.group(), match.start(), match.end()) for match in end_matches] mergeTups = start_tups + end_tups sortedMergeTups = sorted(mergeTups, key=lambda x: x[1]) # remove any unattended close tag for the key present before the current focus start key updatedIdx = -1 for idx in range(len(sortedMergeTups)): if start_token.span()[0] == sortedMergeTups[idx][1]: updatedIdx = idx break sortedMergeTups = sortedMergeTups[updatedIdx:] start_main = sortedMergeTups[0] match_tracker = 0 end_token = None if key == nested_key : if start_main[0] == f'<s_{key}>': for tup in sortedMergeTups[1:]: if tup[0] == f'</s_{key}>': if match_tracker == 0: end_token = tup break else: match_tracker -= 1 elif tup[0] == f'<s_{key}>': match_tracker += 1 elif len(sortedMergeTups) > 1: nextTup = sortedMergeTups[1] if nextTup[0] == f'</s_{key}>': end_token = nextTup if end_token is None: tokens = tokens.replace(start_token[0], "", 1) else: start_token_word = start_main[0] start_token_id = start_main[2] end_token_word = end_token[0] end_token_id = end_token[1] content = tokens[start_token_id: end_token_id] if content is not None: if r"<s_" in content and r"</s_" in content: # non-leaf node value = token2json(content, is_inner_value=True) if value: if len(value) == 1: value = value[0] output[key] = value else: # leaf nodes if key in output.keys(): if isinstance(output[key], str): tempVal = output[key] output[key] = [tempVal] else: output[key] = [] for leaf in content.split(r"<sep/>"): leaf = leaf.strip() if ( leaf in processor.tokenizer.get_added_vocab() and leaf[0] == "<" and leaf[-2:] == "/>" ): leaf = leaf[1:-2] # for categorical special tokens output[key].append(leaf) if len(output[key]) == 1: output[key] = output[key][0] tokens = tokens[end_token[2]:] if tokens[:6] == r"<sep/>": # non-leaf nodes return [output] + token2json(tokens[6:], is_inner_value=True) if len(output): return [output] if is_inner_value else output else: return [] if is_inner_value else {"text_sequence": tokens} `
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/23906
TITLE Move import check to before state reset COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR moves the reset of the `AcceleratorState` to be under the check for if it is available, and guards the resetting better. Fixes # (issue) Fixes https://github.com/huggingface/transformers/issues/23898 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
[ 27 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/23673
TITLE Bump requests from 2.27.1 to 2.31.0 in /examples/research_projects/decision_transformer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [requests](https://github.com/psf/requests) from 2.27.1 to 2.31.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p> <blockquote> <h2>v2.31.0</h2> <h2>2.31.0 (2023-05-22)</h2> <p><strong>Security</strong></p> <ul> <li> <p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential forwarding of <code>Proxy-Authorization</code> headers to destination servers when following HTTPS redirects.</p> <p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests will construct a <code>Proxy-Authorization</code> header that is attached to the request to authenticate with the proxy.</p> <p>In cases where Requests receives a redirect response, it previously reattached the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being sent through the tunneled connection to the destination server. Users who rely on defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy credentials once the change has been fully deployed.</p> <p>Users who do not use a proxy or do not supply their proxy credentials through the user information portion of their proxy URL are not subject to this vulnerability.</p> <p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a> and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p> </li> </ul> <h2>v2.30.0</h2> <h2>2.30.0 (2023-05-03)</h2> <p><strong>Dependencies</strong></p> <ul> <li> <p>⚠️ Added support for urllib3 2.0. ⚠️</p> <p>This may contain minor breaking changes so we advise careful testing and reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a> prior to upgrading.</p> <p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3&lt;2</code>.</p> </li> </ul> <h2>v2.29.0</h2> <h2>2.29.0 (2023-04-26)</h2> <p><strong>Improvements</strong></p> <ul> <li>Requests now defers chunked requests to the urllib3 implementation to improve standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li> <li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p> <blockquote> <h2>2.31.0 (2023-05-22)</h2> <p><strong>Security</strong></p> <ul> <li> <p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential forwarding of <code>Proxy-Authorization</code> headers to destination servers when following HTTPS redirects.</p> <p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests will construct a <code>Proxy-Authorization</code> header that is attached to the request to authenticate with the proxy.</p> <p>In cases where Requests receives a redirect response, it previously reattached the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being sent through the tunneled connection to the destination server. Users who rely on defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy credentials once the change has been fully deployed.</p> <p>Users who do not use a proxy or do not supply their proxy credentials through the user information portion of their proxy URL are not subject to this vulnerability.</p> <p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a> and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p> </li> </ul> <h2>2.30.0 (2023-05-03)</h2> <p><strong>Dependencies</strong></p> <ul> <li> <p>⚠️ Added support for urllib3 2.0. ⚠️</p> <p>This may contain minor breaking changes so we advise careful testing and reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a> prior to upgrading.</p> <p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3&lt;2</code>.</p> </li> </ul> <h2>2.29.0 (2023-04-26)</h2> <p><strong>Improvements</strong></p> <ul> <li>Requests now defers chunked requests to the urllib3 implementation to improve standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li> <li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li> </ul> <h2>2.28.2 (2023-01-12)</h2> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li> <li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li> <li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li> <li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li> <li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li> <li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li> <li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li> <li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li> <li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li> <li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li> <li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.27.1...v2.31.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=requests&package-manager=pip&previous-version=2.27.1&new-version=2.31.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/23845
TITLE forced_decoder_ids in Whisper models significantly impacts performance, use decoder_input_ids instead COMMENTS 8 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request @ArthurZucker probably one for you based on commit logs. Using `forced_decoder_ids` to provide "prompt" and or "prefix" to the whisper model is very inefficient as a forward pass and sampling is done for each token in the `forced_decoder_ids` but the result is already known. Instead the model parameter `decoder_input_ids` could be used which only uses one forward pass to initialise the kv cache with all the input tokens and immediately is sampling useful next tokens. Openai's whisper limits prompt to half the context length (448 // 2 - 1 = 223) , so if you want to use transformers whisper to behave like openai's whisper and you expect 20 words + EOS in your input feature then forward pass counts are: - transformers: 244 - openai-whisper: 21 I'm raising this as a feature request rather than a bug or PR as I think `forced_decoder_ids` is already pretty well embedded in the code and the community so I assume it can't just be ripped out and a discussion is probably required before a PR. Here's some code that demonstrates the issue in IPython: ```python from transformers import ( WhisperForConditionalGeneration, WhisperTokenizerFast, WhisperFeatureExtractor, ) from datasets import load_dataset import torch feature_extractor = WhisperFeatureExtractor() tokenizer = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny.en", language="english") # Patch WhisperForConditionalGeneration._prepare_decoder_input_ids_for_generation because the one on GenerationMixin doesn't handle whisper properly. def prepare_decoder_input_ids_for_generation_patch(self, batch_size, model_input_name, model_kwargs, decoder_start_token_id, bos_token_id, device): if 'decoder_input_ids' not in model_kwargs: return torch.ones((batch_size, 1), dtype=torch.long, device=device) * decoder_start_token_id, model_kwargs else: return model_kwargs.pop('decoder_input_ids'), model_kwargs WhisperForConditionalGeneration._prepare_decoder_input_ids_for_generation = prepare_decoder_input_ids_for_generation_patch model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") audio = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[3]["audio"]["array"] input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features # A custom logits processor to show how many times the forward pass and sample are run def logits_processor_count_factory(): count = 0 def logits_processor_count(input_ids, scores): nonlocal count count += 1 print(count) return scores return logits_processor_count PREV_TOKEN = 50360 # <|startofprev|> prompt_tokens = [PREV_TOKEN, 1770, 13, 2264, 346, 353, 318, 262, 46329, 286, 262, 3504, 6097, 11, 290, 356, 389, 9675, 284, 7062, 465, 21443, 13, 5414, 318, 1770, 13, 2264, 346, 353, 338, 5642, 1342, 3499, 621, 465, 2300, 13, 679, 4952, 514, 326, 379, 428, 43856, 1622, 286, 262, 614, 11, 351, 6786, 290, 32595, 12023, 28236, 878, 514, 11, 985, 2915, 7428, 422, 6600, 290, 663, 2482, 3051, 749, 14704, 284, 262, 2000, 13] # note prompt_ids is prefixed to forced_decoder_ids inside generate # counts to 106 forced_decoder_ids_output = model.generate(input_features=input_features, return_timestamps=False, prompt_ids=torch.LongTensor(prompt_tokens), logits_processor=[logits_processor_count_factory()])[0] 
print(tokenizer.decode(forced_decoder_ids_output, decode_with_timestamps=False)) SOT_TOKEN = 50257 # <|startoftranscript|> NO_TIMESTAMPS_TOKEN = 50362 # <|notimestamps|> decoder_input_ids = torch.LongTensor([prompt_tokens + [SOT_TOKEN, NO_TIMESTAMPS_TOKEN]]) # counts to 31 decoder_input_ids_output = model.generate(input_features=input_features, return_timestamps=False, forced_decoder_ids=None, begin_suppress_tokens=None, decoder_input_ids=decoder_input_ids, logits_processor=[logits_processor_count_factory()])[0] print(tokenizer.decode(decoder_input_ids_output, decode_with_timestamps=False)) ``` You can measure the performance of both approaches in IPython by doing: ```python %timeit model.generate(input_features=input_features, return_timestamps=False, prompt_ids=torch.LongTensor(prompt_tokens))[0] %timeit model.generate(input_features=input_features, return_timestamps=False, forced_decoder_ids=None, begin_suppress_tokens=None, decoder_input_ids=decoder_input_ids)[0] ``` On CPU, for me, using decoder_input_ids is 2x faster with this input. ### Motivation I want to be able to use the transformers implementation of whisper in a production system where cost and processing time will be critical. Due to the way we are using whisper, this issue impacts performance a lot more than the 2x I quoted above; it's more like 5x in our use case. Obviously we can code around it, but if it's possible to change transformers and avoid custom code I'd prefer that. ### Your contribution I'd be able to create a PR, but without knowing more about how the maintainers would like to handle backward compatibility etc. I don't think it's the right place to start. I'd be very happy to be involved in a discussion, offer opinions or testing, etc.
[ 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Good Difficult Issue" ]
https://api.github.com/repos/huggingface/transformers/issues/22372
TITLE Add Restormer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description **Restormer: Efficient Transformer for High-Resolution Image Restoration** was published in CVPR 2022, which introduced a new Vision Transformer based architecture for Image Restoration tasks like Deraining, Motion Deblurring, Defocus Deblurring and Denoising. It reduced the time complexity of Self Attention in Vision Transformers from O(n<sup>2</sup>) to O(n) by introducing **Multi-Dconv Head Transposed Attention**. It also introduced **Gated-Dconv Feed-Forward Network**. @manyana72 and I would like to add this model to Huggingface. cc: @NielsRogge ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation [Paper](https://arxiv.org/pdf/2111.09881.pdf), [Code Implementation](https://github.com/swz30/Restormer) and [pretrained model weights](https://github.com/swz30/Restormer/releases/tag/v1.0)
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22160
TITLE [i18n-it] Translating docs to it COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY <!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 --> ## How-to guides - [] [perf_infer_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_one.mdx)
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/22410
TITLE Bump redis from 4.1.4 to 4.5.3 in /examples/research_projects/decision_transformer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [redis](https://github.com/redis/redis-py) from 4.1.4 to 4.5.3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/redis/redis-py/releases">redis's releases</a>.</em></p> <blockquote> <h2>4.5.3</h2> <h1>Changes</h1> <p>Update urgency: HIGH: There is a critical bug that may affect a subset of users. Upgrade!</p> <h2>🐛 Bug Fixes</h2> <ul> <li><a href="https://cwe.mitre.org/data/definitions/404.html">CWE-404</a> AsyncIO Race Condition Fix (<a href="https://redirect.github.com/redis/redis-py/issues/2624">#2624</a>, <a href="https://redirect.github.com/redis/redis-py/issues/2579">#2579</a>)</li> </ul> <h2>4.5.2</h2> <h1>Changes</h1> <h2>🚀 New Features</h2> <ul> <li>Introduce AbstractConnection so that UnixDomainSocketConnection can call super().<strong>init</strong> (<a href="https://redirect.github.com/redis/redis-py/issues/2588">#2588</a>)</li> <li>Added queue_class to REDIS_ALLOWED_KEYS (<a href="https://redirect.github.com/redis/redis-py/issues/2577">#2577</a>)</li> <li>Made search document subscriptable (<a href="https://redirect.github.com/redis/redis-py/issues/2615">#2615</a>)</li> <li>Sped up the protocol parsing (<a href="https://redirect.github.com/redis/redis-py/issues/2596">#2596</a>)</li> </ul> <h2>🐛 Bug Fixes</h2> <ul> <li>Fix behaviour of async PythonParser to match RedisParser as for issue <a href="https://redirect.github.com/redis/redis-py/issues/2349">#2349</a> (<a href="https://redirect.github.com/redis/redis-py/issues/2582">#2582</a>)</li> <li>Replace async_timeout by asyncio.timeout (<a href="https://redirect.github.com/redis/redis-py/issues/2602">#2602</a>)</li> <li>Update json().arrindex() default values (<a href="https://redirect.github.com/redis/redis-py/issues/2611">#2611</a>)</li> </ul> <h2>🧰 Maintenance</h2> <ul> <li>Coverage for pypy-3.9 (<a href="https://redirect.github.com/redis/redis-py/issues/2608">#2608</a>)</li> <li>Developer Experience: Adding redis version compatibility details to the README (<a href="https://redirect.github.com/redis/redis-py/issues/2621">#2621</a>)</li> <li>Remove redundant assignment to RedisCluster.nodes_manager. 
(<a href="https://redirect.github.com/redis/redis-py/issues/2620">#2620</a>)</li> <li>Developer Experience: [types] update return type of smismember to list[int] (<a href="https://redirect.github.com/redis/redis-py/issues/2617">#2617</a>)</li> <li>Developer Experience: [docs] ConnectionPool SSL example (<a href="https://redirect.github.com/redis/redis-py/issues/2605">#2605</a>)</li> <li>Developer Experience: Fixed CredentialsProvider examples (<a href="https://redirect.github.com/redis/redis-py/issues/2587">#2587</a>)</li> <li>Developer Experience: Update README to make pip install copy-pastable on zsh (<a href="https://redirect.github.com/redis/redis-py/issues/2584">#2584</a>)</li> <li>Developer Experience: Fix for <code>lpop</code> and <code>rpop</code> return typing (<a href="https://redirect.github.com/redis/redis-py/issues/2590">#2590</a>)</li> </ul> <h2>Contributors</h2> <p>We'd like to thank all the contributors who worked on this release!</p> <p><a href="https://github.com/CrimsonGlory"><code>@​CrimsonGlory</code></a>, <a href="https://github.com/Galtozzy"><code>@​Galtozzy</code></a>, <a href="https://github.com/aksinha334"><code>@​aksinha334</code></a>, <a href="https://github.com/barshaul"><code>@​barshaul</code></a>, <a href="https://github.com/chayim"><code>@​chayim</code></a>, <a href="https://github.com/davemcphee"><code>@​davemcphee</code></a>, <a href="https://github.com/dvora-h"><code>@​dvora-h</code></a>, <a href="https://github.com/kristjanvalur"><code>@​kristjanvalur</code></a>, <a href="https://github.com/ryin1"><code>@​ryin1</code></a>, <a href="https://github.com/sileht"><code>@​sileht</code></a>, <a href="https://github.com/thebarbershop"><code>@​thebarbershop</code></a>, <a href="https://github.com/uglide"><code>@​uglide</code></a>, <a href="https://github.com/woutdenolf"><code>@​woutdenolf</code></a> and <a href="https://github.com/zakaf"><code>@​zakaf</code></a></p> <h2>4.5.1</h2> <h1>Changes</h1> <h2>🐛 Bug Fixes</h2> <ul> <li>Fix <a href="https://redirect.github.com/redis/redis-py/issues/2581">#2581</a> <code>UnixDomainSocketConnection</code> object has no attribute <code>_command_packer</code> (<a href="https://redirect.github.com/redis/redis-py/issues/2583">#2583</a>)</li> </ul> <h2>Contributors</h2> <p>We'd like to thank all the contributors who worked on this release!</p> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/redis/redis-py/commit/66a4d6b2a493dd3a20cc299ab5fef3c14baad965"><code>66a4d6b</code></a> AsyncIO Race Condition Fix (<a href="https://redirect.github.com/redis/redis-py/issues/2641">#2641</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/318b114f4da9846a2a7c150e1fb702e9bebd9fdf"><code>318b114</code></a> Version 4.5.2 (<a href="https://redirect.github.com/redis/redis-py/issues/2627">#2627</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/1b2f408259405d412d7530291902f9e0c8bd34b3"><code>1b2f408</code></a> Fix behaviour of async PythonParser to match RedisParser as for issue <a href="https://redirect.github.com/redis/redis-py/issues/2349">#2349</a> (...</li> <li><a href="https://github.com/redis/redis-py/commit/7d474f90453c7b90bd06c94e0250b618120a599d"><code>7d474f9</code></a> introduce AbstractConnection so that UnixDomainSocketConnection can call supe...</li> <li><a href="https://github.com/redis/redis-py/commit/c87172347584301f453c601c483126e4800257b7"><code>c871723</code></a> pypy-3.9 CI (<a href="https://redirect.github.com/redis/redis-py/issues/2608">#2608</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/d63313bf6080acaf18d61e072c78303adc0d4166"><code>d63313b</code></a> add queue_class to REDIS_ALLOWED_KEYS (<a href="https://redirect.github.com/redis/redis-py/issues/2577">#2577</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/c61eeb2e3b5dff1f01eb1e665f424c7e75354f56"><code>c61eeb2</code></a> Adding supported redis/library details (<a href="https://redirect.github.com/redis/redis-py/issues/2621">#2621</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/25e85e51e57b7aae9eb8fc77cfb0a45a07a501a7"><code>25e85e5</code></a> fix: replace async_timeout by asyncio.timeout (<a href="https://redirect.github.com/redis/redis-py/issues/2602">#2602</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/91ab12a0f1bdf0e433131e1a51578e9fa2f89718"><code>91ab12a</code></a> Remove redundant assignment. (<a href="https://redirect.github.com/redis/redis-py/issues/2620">#2620</a>)</li> <li><a href="https://github.com/redis/redis-py/commit/8bfd492240fd33489a86cd3d353e3ece1fc94c10"><code>8bfd492</code></a> Making search document subscriptable (<a href="https://redirect.github.com/redis/redis-py/issues/2615">#2615</a>)</li> <li>Additional commits viewable in <a href="https://github.com/redis/redis-py/compare/v4.1.4...v4.5.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=redis&package-manager=pip&previous-version=4.1.4&new-version=4.5.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/24842
TITLE Request support for RWKV-4-World model. COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description As RWKV-4-World uses a different tokenizer and vocabulary, the current RWKV support in transformers is incompatible with it. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://huggingface.co/StarRing2022/RWKV-4-World-1.5B @StarRing2022
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24699
TITLE Bump scipy from 1.8.0 to 1.10.0 in /examples/research_projects/decision_transformer COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [scipy](https://github.com/scipy/scipy) from 1.8.0 to 1.10.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/scipy/scipy/releases">scipy's releases</a>.</em></p> <blockquote> <h1>SciPy 1.10.0 Release Notes</h1> <p>SciPy <code>1.10.0</code> is the culmination of <code>6</code> months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with <code>python -Wd</code> and check for <code>DeprecationWarning</code> s). Our development attention will now shift to bug-fix releases on the 1.10.x branch, and on adding new features on the main branch.</p> <p>This release requires Python <code>3.8+</code> and NumPy <code>1.19.5</code> or greater.</p> <p>For running on PyPy, PyPy3 <code>6.0+</code> is required.</p> <h1>Highlights of this release</h1> <ul> <li>A new dedicated datasets submodule (<code>scipy.datasets</code>) has been added, and is now preferred over usage of <code>scipy.misc</code> for dataset retrieval.</li> <li>A new <code>scipy.interpolate.make_smoothing_spline</code> function was added. This function constructs a smoothing cubic spline from noisy data, using the generalized cross-validation (GCV) criterion to find the tradeoff between smoothness and proximity to data points.</li> <li><code>scipy.stats</code> has three new distributions, two new hypothesis tests, three new sample statistics, a class for greater control over calculations involving covariance matrices, and many other enhancements.</li> </ul> <h1>New features</h1> <h1><code>scipy.datasets</code> introduction</h1> <ul> <li>A new dedicated <code>datasets</code> submodule has been added. The submodules is meant for datasets that are relevant to other SciPy submodules ands content (tutorials, examples, tests), as well as contain a curated set of datasets that are of wider interest. As of this release, all the datasets from <code>scipy.misc</code> have been added to <code>scipy.datasets</code> (and deprecated in <code>scipy.misc</code>).</li> <li>The submodule is based on <a href="https://www.fatiando.org/pooch/latest/">Pooch</a> (a new optional dependency for SciPy), a Python package to simplify fetching data files. This move will, in a subsequent release, facilitate SciPy to trim down the sdist/wheel sizes, by decoupling the data files and moving them out of the SciPy repository, hosting them externally and</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/scipy/scipy/commit/dde50595862a4f9cede24b5d1c86935c30f1f88a"><code>dde5059</code></a> REL: 1.10.0 final [wheel build]</li> <li><a href="https://github.com/scipy/scipy/commit/7856f281b016c585b82d03723c4494bcdbdcd4a5"><code>7856f28</code></a> Merge pull request <a href="https://redirect.github.com/scipy/scipy/issues/17696">#17696</a> from tylerjereddy/treddy_110_final_prep</li> <li><a href="https://github.com/scipy/scipy/commit/205b6243c6d075d05695e7ac6d007e0f03bfbf42"><code>205b624</code></a> DOC: add missing author</li> <li><a href="https://github.com/scipy/scipy/commit/1ab9f1b10145f0a974d5531700e72d1fb4229b76"><code>1ab9f1b</code></a> DOC: update 1.10.0 relnotes</li> <li><a href="https://github.com/scipy/scipy/commit/ac2f45fbe1e39a8f52c1ea2e68764009f02973c0"><code>ac2f45f</code></a> MAINT: integrate._qmc_quad: mark as private with preceding underscore</li> <li><a href="https://github.com/scipy/scipy/commit/3e0ae1a21f51ebee3a77733c42700d87a0c35d7d"><code>3e0ae1a</code></a> REV: integrate.qmc_quad: delay release to SciPy 1.11.0</li> <li><a href="https://github.com/scipy/scipy/commit/34cdf05c86548de1c4ca1b2798cdc23885af807b"><code>34cdf05</code></a> MAINT: FFT pybind11 fixups</li> <li><a href="https://github.com/scipy/scipy/commit/843500aabde17aaf1eec65c589d50bd12ee35039"><code>843500a</code></a> Merge pull request <a href="https://redirect.github.com/scipy/scipy/issues/17689">#17689</a> from mdhaber/gh17686</li> <li><a href="https://github.com/scipy/scipy/commit/089924b61012a106ffa4f58939b0180124051a0b"><code>089924b</code></a> REL: integrate.qmc_quad: remove from release notes</li> <li><a href="https://github.com/scipy/scipy/commit/3e47110f10e3267d228e9da84174f3cee325e7c3"><code>3e47110</code></a> REL: 1.10.0rc3 unreleased</li> <li>Additional commits viewable in <a href="https://github.com/scipy/scipy/compare/v1.8.0...v1.10.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=scipy&package-manager=pip&previous-version=1.8.0&new-version=1.10.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/25096
TITLE Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/lxmert COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li> <li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li> <li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li> <li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2022.12.7&new-version=2023.7.22)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/24915
TITLE ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description ViTPose is used in 2D human pose estimation, a subset of the keypoint detection task (#24044). It provides a simple baseline for vision transformer-based human pose estimation. It utilises a pretrained vision transformer backbone to extract features and a simple decoder head to process the extracted features. Despite having no elaborate design components, ViTPose obtained state-of-the-art (SOTA) performance of 80.9 AP on the MS COCO Keypoint test-dev set. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code and weights: https://github.com/ViTAE-Transformer/ViTPose Paper: https://arxiv.org/abs/2204.12484 @Annbless
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24221
TITLE Bump transformers from 4.26.0 to 4.30.0 in /examples/research_projects/vqgan-clip COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [transformers](https://github.com/huggingface/transformers) from 4.26.0 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. 
It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.26.0...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.26.0&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/22608
TITLE [DO NOT MERGE] Add Crop Transformation COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Abstracts out the cropping logic into a more generic `crop` function which other, more specific cropping functions, e.g. `center_crop`, can call. Motivation: * The output of the CLIP feature extractor changed after #17628. This was due to a difference in how the `top` and `left` coordinates were calculated, resulting in some values being off by one. * The original CLIP feature extractor matched the original implementation. * Having a more generic `crop` method enables each image processor to have its own center_crop logic with minimal code replication. [BEFORE MERGING]: Verify this doesn't have a large impact on any popular CLIP-dependent pipelines. Fixes #22505 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/24276
TITLE [TokenizerSlow] `replace_additional_special_tokens` is not doing much COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Just flagging this as the `add_special_tokens` method got pretty complicated, adding a kwarg, `replace_additional_special_tokens`, that supposedly can prevent replacing the `self._additional_special_tokens` attribute. For any tokenizer, this will remove the old token from the list, but will not update the internal `trie`, and thus has no effect on tokenization at all: ```python >>> from transformers import XLMRobertaTokenizer >>> tokenizer_a = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base') >>> tokenizer_a.add_special_tokens({"additional_special_tokens":["<//s>"]}) >>> tokenizer_a.additional_special_tokens ['<//s>'] >>> print(tokenizer_a.tokenize("This is a <//s>")) ['▁This', '▁is', '▁a', '<//s>'] >>> tokenizer_a.add_special_tokens({"additional_special_tokens":["<///s>"]}, replace_additional_special_tokens= True) >>> print(tokenizer_a.tokenize("This is a <//s>")) ['▁This', '▁is', '▁a', '<//s>'] ``` This will be addressed in #23909
[ 22 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization" ]
https://api.github.com/repos/huggingface/transformers/issues/23825
TITLE [i18n-<languageCode>] Translating docs to <languageName> COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY <!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/23846
TITLE Add LaVIN model COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description LaVIN is a vision-language instructed model that is affordable to train (it was trained in a few hours on 8 A100 GPUs) with good performance on ScienceQA. I'd like to add LaVIN to HF transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The paper [Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models](https://arxiv.org/pdf/2305.15023.pdf) is by [Gen Luo](https://luogen1996.github.io/), [Yiyi Zhou](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&projects=&template=new-model-addition.yml), [Tianhe Ren](https://rentainhe.github.io/), [Shengxin Chen](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&projects=&template=new-model-addition.yml), [Xiaoshuai Sun](https://sites.google.com/view/xssun), and [Rongrong Ji](https://mac.xmu.edu.cn/rrji/) @luogen1996 has made the code and model weights available at [github.com/luogen1996/LaVIN](https://github.com/luogen1996/LaVIN). The weights for the following models are available at the following links:

### ScienceQA

| Model | Weights | Time | Memory | #Params | Acc | Weights |
|-----------|----------:|----------:|-------:|--------:|-----:|-----------------:|
| LaVIN-7B | LLaMA | 1.4 hours | 33.9G | 3.8M | 89.37 | [google drive](https://drive.google.com/file/d/10X2qCBYrLH1grZOHwHRMXLUoz-S6MSgV/view?usp=share_link) |
| LaVIN-7B | Vicuna | 1.4 hours | 33.9G | 3.8M | 89.41 | [google drive](https://drive.google.com/file/d/1nuMxeiWlnJKxDybCshg8pVGSvLc5dZy8/view?usp=share_link) |
| LaVIN-13B | LLaMA | 2 hours | 55.9G | 5.4M | 90.54 | [google drive](https://drive.google.com/file/d/1LkKUY54spZkkeXrR7BDmU-xmK9YadcKM/view?usp=share_link) |

### Multimodal ChatBot

| Model | Weights | Time | Memory | #Params | Acc | Weights |
|-----------|----------:|---------:|-------:|--------:|----:|-----------------:|
| LaVIN-13B | LLaMA | 75 hours | 55.9G | 5.4M | - | [google drive](https://drive.google.com/file/d/1rHQNSaiGzFHYGgsamtySPYnd5AW4OE9j/view?usp=share_link) |
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24219
TITLE Bump transformers from 4.19.0 to 4.30.0 in /examples/research_projects/codeparrot COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [transformers](https://github.com/huggingface/transformers) from 4.19.0 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. 
It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.19.0...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.19.0&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/24222
TITLE Bump transformers from 4.26.1 to 4.30.0 in /examples/tensorflow/language-modeling-tpu COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [transformers](https://github.com/huggingface/transformers) from 4.26.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. 
It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.26.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=4.26.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/24924
TITLE `VisionTextDualEncoder`: Distributed training is always enabled COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.13.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: **It seems yes, but I don't want to ;)** ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I'm running the **unchanged** ["VisionTextDualEncoder and CLIP model training example"](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py) on my local laptop (which has 1 GPU) and wonder why it claims to do `distributed training: True` (and not `False`). From the output: ``` 07/19/2023 15:21:22 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False ``` The above output originates from [`run_clip.py`](https://github.com/huggingface/transformers/blob/ee4250a35f3bd5e9a4379b4907b3d8f9d5d9523f/examples/pytorch/contrastive-image-text/run_clip.py#L260C1-L263C6) ``` logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) ``` * The default should be `training_args.local_rank=-1` according to [`TrainingArguments`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) but is somehow set to `0` in this example and I don't know why. * Adding `local_rank=-1` to the [run_clip.py example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model) does not show any effect. My questions: * Is it intended that `local_rank` is set to `0`? * Does `local_rank=0` really mean that distributed training in `Trainer` is enabled? (I'm new to `Trainer` and usually work with `DistributedDataParallel`) * How to switch off distributed training? --- Bigger picture: Sometimes my training (on a cluster) hangs up in n-1 iteration and never finishes. I wonder if this has to do with distributed training. I don't know how to debug this. ``` 100%|█████████▉| 2875/2876 [11:34<00:00, 4.10it/s] ```` Thanks in advance! ### Expected behavior I don't want to use distributed training, i.e. `training_args.local_rank = -1`
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
https://api.github.com/repos/huggingface/transformers/issues/23181
TITLE Add BROS COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description [BROS(BERT Relying On Spatiality)](https://arxiv.org/abs/2108.04539) is a pre-trained multimodal transformer for Document Understanding using OCR results of document images (text and bounding box pairs). and I would like to add this model to Huggingface as my first contribution! ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/clovaai/bros
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/23077
TITLE [i18n-<languageCode>] Translating docs to <languageName> COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY <!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/23674
TITLE custom stopping_criteria function doesn't receive logits scores (receives None instead) COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Reproduction Steps: 1. Initialize a BART model & its tokenizer (in my case it is facebook/bart-large) 2. Create a custom stopping_criteria function and add it to a StoppingCriteriaList object 3. Run model.generate() with your stopping criteria list as an argument The scores argument is always None Example code: ```python import torch from transformers import StoppingCriteriaList, BartForConditionalGeneration, BartTokenizer def custom_stopping_criteria(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: print ("Scores:", scores) return False stopping_criteria = StoppingCriteriaList([custom_stopping_criteria]) model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0) tok = BartTokenizer.from_pretrained("facebook/bart-large") example_english_phrase = "UN Chief Says There Is No <mask> in Syria" batch = tok(example_english_phrase, return_tensors="pt") model.generate(batch["input_ids"], stopping_criteria=stopping_criteria) ``` The above code uses a stopping criterion that just prints the scores value when called (which prints None) ### Expected behavior The expected behavior should be to have the scores logits populated with values instead of being None (values before or after softmax don't matter)
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/24999
TITLE dataloading bug after upgrading to 4.31.0 COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers=4.31.0 pytorch=1.13.1 ### Who can help? Hi @sgugger and @ArthurZucker When I used transformers 4.29.2 and 4.30.2 with the streaming dataset and local batch size=1, I didn't pad the text sequence and everything goes well. However, after I upgrade the transformers to 4.31.0. My previous training pipeline fails. Error messages are: File "myenv/lib/python3.8/site-packages/accelerate/data_loader.py", line 556, in __iter__ next_batch, next_batch_info = self._fetch_batches(main_iterator) File "myenv/lib/python3.8/site-packages/accelerate/data_loader.py", line 520, in _fetch_batches batch = concatenate(batches, dim=0) File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 441, in concatenate return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()}) File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 441, in <dictcomp> return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()}) File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 444, in concatenate return torch.cat(data, dim=dim) RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 655 but got size 563 for tensor number 1 in the list. I find that in the following function in data_loader.py (from accelerate), the variable "batches" contain examples with different lengths, causing the error. For example, I trained my model on 4 GPUs with local batch size=1. Then, the list batches will have 4 elements (each is a batch of 1 example). But these 4 elements may have different lengths, causing the above error when concatenating. However, as my local batch size=1, there should be no need to make the samples to be in the same length. I think it is a bug introduced in 4.31.0 because in the previous transformers version (e.g., 4.29.2 and 4.30.2), the training script can run smoothly without raising the error. I look forward to your comments and suggestions. Thank you def _fetch_batches(self, iterator): batches, batch = None, None # On process 0, we gather the batch to dispatch. if self.state.process_index == 0: try: if self.split_batches: # One batch of the main iterator is dispatched and split. batch = next(iterator) else: # num_processes batches of the main iterator are concatenated then dispatched and split. # We add the batches one by one so we have the remainder available when drop_last=False. batches = [] for _ in range(self.state.num_processes): batches.append(next(iterator)) batch = concatenate(batches, dim=0) ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction dataset = load_dataset("json", data_files={"train": train_file, "eval": eval_file}, streaming=True)
dataset = dataset.with_format("torch")
train_dataset = dataset["train"]
eval_dataset = dataset["eval"]
train_dataset = train_dataset.map(tokenize_function, batched=True)
eval_dataset = eval_dataset.map(tokenize_function, batched=True)
train_model(model, train_dataset, eval_dataset) ### Expected behavior no error message and training smoothly
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
https://api.github.com/repos/huggingface/transformers/issues/23764
TITLE Whisper `get_prompt_ids` throws error when used with a 'FastTokenizer' COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.30.0.dev0 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.12.0 - Safetensors version: 0.2.8 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi @hollance ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```py from transformers import WhisperTokenizerFast, WhisperTokenizer, GPT2Tokenizer, GPT2TokenizerFast slow_tokenizer = WhisperTokenizer.from_pretrained('openai/whisper-tiny') prompt_ids = slow_tokenizer.get_prompt_ids("Hello, world!", return_tensors="pt") print('Whisper slow tokenizer succeeded') try: fast_tokenizer = WhisperTokenizerFast.from_pretrained('openai/whisper-tiny') prompt_ids = fast_tokenizer.get_prompt_ids("Hello, world!", return_tensors="pt") except Exception as e: print('Whisper fast tokenizer failed - ', e) # Alternatively, this slow-fast param difference can be seen when tokenizing with a # pipeline or any model that has a slow tokenizer `prepare_for_tokenization` method # that checks `add_prefix_space` (GPT2 is old but there are ~20 models this applies to) tokenizer = GPT2Tokenizer.from_pretrained('gpt2', use_fast=False) prompt_ids = tokenizer("Hello, world!", add_prefix_space=True)["input_ids"] print('GPT2 slow tokenizer succeeded') try: tokenizer = GPT2TokenizerFast.from_pretrained('gpt2') prompt_ids = tokenizer("Hello, world!", add_prefix_space=True)["input_ids"] except Exception as e: print('Whisper fast tokenizer failed - ', e) ``` ### Expected behavior Are the slow and fast tokenizers supposed to have the same arg options for tokenizing text? They diverge with the `add_prefix_space` argument; while the slow tokenizer accepts and applies it with the [prepare_for_tokenization](https://github.com/huggingface/transformers/blob/3416bba7c70c358ac17efd3be31e9090135969ab/src/transformers/tokenization_utils.py#L502) method that same model's fast tokenizer does not and throws an error. Given that this arg difference appears to be present across all models where `add_prefix_space` can be provided to the slow tokenizer (at a glance appears to be ~20) I'd imagine the answer is no, the arg options aren't supposed to be 1:1. The fix for the Whisper tokenizer `get_prompt_ids` method is straightforward as we can just do `" " + text` directly in the method instead of `add_prefix_space=True`, but I wanted to bring up the above in case that argument is actually supposed to compatible across both slow and fast tokenizers in which case we would also want to address that.
[ 22 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization" ]
https://api.github.com/repos/huggingface/transformers/issues/24264
TITLE MeZO Forward Pass Implementation COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request https://github.com/princeton-nlp/MeZO/blob/main/large_models/trainer.py ### Motivation Faster training ### Your contribution Just a user atm.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/24309
TITLE saving model fails with deepspeed COMMENTS 10 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers v4.30.0, python 3.8 There is a bug [here](https://github.com/huggingface/transformers/blob/0b7b4429c78de68acaf72224eb6dae43616d820c/src/transformers/trainer.py#LL2257C59-L2257C59): `PreTrainedModel` does not have a `save_checkpoint` method. Error trace: ``` Traceback (most recent call last): File "funtuner/trainer.py", line 98, in train trainer.train() File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1540, in train return inner_training_loop( File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1884, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2196, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2257, in _save_checkpoint self.model_wrapped.save_checkpoint(output_dir) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/peft/peft_model.py", line 289, in __getattr__ return getattr(self.base_model, name) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/peft/tuners/lora.py", line 206, in __getattr__ return getattr(self.model, name) File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'GPTNeoXForCausalLM' object has no attribute 'save_checkpoint' ``` ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py) Run python3 funtuner/trainer.py - export PYTHONPATH="${PYTHONPATH}:/your-path/Funtuner" - please change the log_dir to your folder [here](https://github.com/explodinggradients/Funtuner/blob/c4e66209d5ee276a7eb8caf582435f1eaafbf18f/funtuner/config/config.yaml#L4) also you might want to set log_wandb=False - `dev-train` branch ### Expected behavior Please ensure that model training is running at least 1000 steps without any errors.
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
https://api.github.com/repos/huggingface/transformers/issues/22923
TITLE Need support for Sentence Similarity Pipeline COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request HuggingFace now has a lot of Sentence Similarity models, but the pipeline does not yet support this: https://huggingface.co/docs/transformers/main_classes/pipelines ### Motivation HuggingFace now has a lot of Sentence Similarity models, but the pipeline does not yet support this: https://huggingface.co/docs/transformers/main_classes/pipelines ### Your contribution I can write a PR, but might need someone else's help.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/25097
TITLE Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/visual_bert COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY [//]: # (dependabot-start) ⚠️ **Dependabot is rebasing this PR** ⚠️ Rebasing might not happen immediately, so don't worry if this takes some time. Note: if you make any changes to this PR yourself, they will take precedence over the rebase. --- [//]: # (dependabot-end) Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li> <li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li> <li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li> <li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2022.12.7&new-version=2023.7.22)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/23087
TITLE RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.13.0-27-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.15 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.10.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I want to train an embedding-based retrieval QA system by minimizing the contrastive loss of correct (q,a) pairs against in-batch negatives. I also want it to run on multiple GPUs. But I run into a backward-propagation problem in the position embedding layer of BERT (which I infer from the error log) when running in a distributed manner. I don't know what is broken (trainer? BertModel? pytorch?). By the way, the code works in the single-GPU setting. Command that I ran: ```bash torchrun --nproc_per_node 2 retrieval_qa.py \ --model_name_or_path bert-base-uncased \ --output_dir debug \ --max_steps 10000 \ --remove_unused_columns False \ --learning_rate 5e-5 \ --logging_steps 10 \ --save_steps 500 \ --warmup_ratio 0.0 \ --per_device_train_batch_size 16 \ --normalize True ``` Error details: ```bash ***** Running training ***** Num examples = 20360 Num Epochs = 16 Instantaneous batch size per device = 16 Total train batch size (w. parallel, distributed & accumulation) = 32 Gradient Accumulation steps = 1 Total optimization steps = 10000 Number of trainable parameters = 109482240 0%| | 0/10000 [00:00<?, ?it/s][W python_anomaly_mode.cpp:104] Warning: Error detected in EmbeddingBackward0. 
Traceback of forward call that caused the error: File "retrieval_qa.py", line 213, in <module> main() File "retrieval_qa.py", line 209, in main trainer.train() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2523, in training_step loss = self.compute_loss(model, inputs) File "retrieval_qa.py", line 142, in compute_loss token_type_ids=inputs[k]['token_type_ids'], File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward output = self.module(*inputs[0], **kwargs[0]) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "retrieval_qa.py", line 103, in forward model_output = self.model(**kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1019, in forward past_key_values_length=past_key_values_length, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 236, in forward position_embeddings = self.position_embeddings(position_ids) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/functional.py", line 2044, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) (function _print_stack) [W python_anomaly_mode.cpp:104] Warning: Error detected in EmbeddingBackward0. 
Traceback of forward call that caused the error: File "retrieval_qa.py", line 213, in <module> main() File "retrieval_qa.py", line 209, in main trainer.train() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2523, in training_step loss = self.compute_loss(model, inputs) File "retrieval_qa.py", line 142, in compute_loss token_type_ids=inputs[k]['token_type_ids'], File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward output = self.module(*inputs[0], **kwargs[0]) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "retrieval_qa.py", line 103, in forward model_output = self.model(**kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1019, in forward past_key_values_length=past_key_values_length, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 236, in forward position_embeddings = self.position_embeddings(position_ids) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/nn/functional.py", line 2044, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) (function _print_stack) Traceback (most recent call last): File "retrieval_qa.py", line 213, in <module> main() File "retrieval_qa.py", line 209, in main trainer.train() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1531, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 1775, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/transformers/trainer.py", line 2541, in training_step Traceback (most recent call last): File "retrieval_qa.py", line 213, in <module> loss.backward() File "/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File 
"/data01/lizehan/anaconda3/envs/beir/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! ``` Source code of `retrieval_qa.py` ```Python import logging import os import sys from typing import Dict, List, Tuple, Optional, Any, Union import torch from torch import nn from torch.nn import functional as F from transformers import AutoConfig, AutoModel, AutoTokenizer from transformers import ( HfArgumentParser, set_seed, ) import os from dataclasses import dataclass, field from typing import Optional, List from transformers import TrainingArguments from transformers import DataCollatorWithPadding from transformers.trainer import Trainer import logging logger = logging.getLogger(__name__) # Name of the files used for checkpointing TRAINING_ARGS_NAME = "training_args.bin" TRAINER_STATE_NAME = "trainer_state.json" OPTIMIZER_NAME = "optimizer.pt" SCHEDULER_NAME = "scheduler.pt" SCALER_NAME = "scaler.pt" @dataclass class ModelArguments: model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"} ) normalize: bool = field(default=False) pooling: str = field(default='mean') @dataclass class QPCollator(DataCollatorWithPadding): """ Wrapper that does conversion from List[Tuple[encode_qry, encode_psg]] to List[qry], List[psg] and pass batch separately to the actual collator. Abstract out data detail for the model. 
""" max_q_len: int = 32 max_p_len: int = 128 def __call__(self, features): keys = list(features[0].keys()) collated_batch = {} for key in keys: if not isinstance(features[0][key], str): continue text = [f[key] for f in features] # print(text) text_batch = self.tokenizer( text, padding='max_length', truncation=True, max_length=self.max_p_len, return_tensors="pt", ) collated_batch[key] = text_batch return collated_batch class AutoModelForSentenceEmbedding(nn.Module): def __init__( self, model_name_or_path, tokenizer=None, pooling='cls', normalize=True, ): super(AutoModelForSentenceEmbedding, self).__init__() self.model = AutoModel.from_pretrained(model_name_or_path) self.tokenizer = tokenizer if tokenizer else AutoTokenizer.from_pretrained(model_name_or_path) self.pooling = pooling self.normalize = normalize def forward(self, **kwargs): model_output = self.model(**kwargs) embeddings = self.mean_pooling(model_output, kwargs['attention_mask']) if self.normalize: embeddings = F.normalize(embeddings, p=2, dim=1) return embeddings def mean_pooling(self, model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) def save_pretrained(self, output_path): self.model.save_pretrained(output_path) class EmbeddingTrainer(Trainer): def _save(self, output_dir: Optional[str] = None, state_dict=None): # If we are executing this function, we are the process zero, so we don't check for that. output_dir = output_dir if output_dir is not None else self.args.output_dir os.makedirs(output_dir, exist_ok=True) logger.info(f"Saving model checkpoint to {output_dir}") self.model.save_pretrained(output_dir) if self.tokenizer is not None: self.tokenizer.save_pretrained(output_dir) # Good practice: save your training arguments together with the trained model torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME)) def compute_loss(self, model, inputs, return_outputs=False): all_embeddings = {} for k in ['question', 'answer']: all_embeddings[k] = model( input_ids=inputs[k]['input_ids'], attention_mask=inputs[k]['attention_mask'], token_type_ids=inputs[k]['token_type_ids'], ) embeddings_query = all_embeddings['question'] embeddings_pos = all_embeddings['answer'] scores = embeddings_query @ embeddings_pos.T labels = torch.arange(0, embeddings_query.shape[0], dtype=torch.long, device=embeddings_query.device) self.cross_entropy = torch.nn.CrossEntropyLoss(reduction='mean') loss = self.cross_entropy(scores, labels) return loss def main(): parser = HfArgumentParser((ModelArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): model_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, training_args = parser.parse_args_into_dataclasses() model_args: ModelArguments training_args: TrainingArguments if ( os.path.exists(training_args.output_dir) and os.listdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir ): raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome." 
) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN, ) set_seed(training_args.seed) tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, cache_dir=model_args.cache_dir ) model = AutoModelForSentenceEmbedding( model_args.model_name_or_path, pooling=model_args.pooling, normalize=model_args.normalize, ) from datasets import load_dataset wq = load_dataset('wiki_qa', split='train') train_dataset = wq.remove_columns('label') data_collator = QPCollator(tokenizer=tokenizer) torch.autograd.set_detect_anomaly(True) trainer = EmbeddingTrainer( model=model, args=training_args, train_dataset=train_dataset, data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() if __name__ == "__main__": main() ``` ### Expected behavior Currently there is no problem on single gpu. I want this code to run normally on multi-gpus. But it seems somewhere is broken... It's hard to find where the problem is cause I'm not super familar with how pytorch/trainer/bertmodel works in distributed manner... Could you help me? Thanks!
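One direction that sometimes avoids this class of error in contrastive setups is to make a single forward call through the DDP-wrapped model per step, instead of the two separate calls inside `compute_loss`. The sketch below is untested for this exact script and only illustrates the idea; it drops into the `EmbeddingTrainer` above and relies on both fields being padded to the same length, which the collator above already does:

```python
def compute_loss(self, model, inputs, return_outputs=False):
    # Sketch: one forward pass over 2 * batch_size sequences instead of two passes.
    q, a = inputs["question"], inputs["answer"]
    batch_size = q["input_ids"].size(0)
    embeddings = model(
        input_ids=torch.cat([q["input_ids"], a["input_ids"]], dim=0),
        attention_mask=torch.cat([q["attention_mask"], a["attention_mask"]], dim=0),
        token_type_ids=torch.cat([q["token_type_ids"], a["token_type_ids"]], dim=0),
    )
    embeddings_query = embeddings[:batch_size]
    embeddings_pos = embeddings[batch_size:]
    scores = embeddings_query @ embeddings_pos.T
    labels = torch.arange(scores.size(0), dtype=torch.long, device=scores.device)
    return torch.nn.CrossEntropyLoss(reduction="mean")(scores, labels)
```

If the error persists, it may instead come from how DDP re-broadcasts buffers (such as BERT's `position_ids`) between forward calls, in which case disabling buffer broadcasting in the DDP wrapper is another avenue to investigate.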
[ 24 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ "trainer" ]
https://api.github.com/repos/huggingface/transformers/issues/24235
TITLE Add SPTSv2 COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description SPTSv2 is the latest SOTA text spotting model from Bytedance. Given that we already support DETR, it should be a breeze to support this model as well. SPTSv2 is an improvement over the first version: https://github.com/shannanyinxiang/SPTS. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/bytedance/SPTSv2
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/23864
TITLE Mychatter COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description I am still learning so the content is unclear even to me ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/23668
TITLE Bump requests from 2.22.0 to 2.31.0 in /examples/research_projects/lxmert COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY [//]: # (dependabot-start) ⚠️ **Dependabot is rebasing this PR** ⚠️ Rebasing might not happen immediately, so don't worry if this takes some time. Note: if you make any changes to this PR yourself, they will take precedence over the rebase. --- [//]: # (dependabot-end) Bumps [requests](https://github.com/psf/requests) from 2.22.0 to 2.31.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p> <blockquote> <h2>v2.31.0</h2> <h2>2.31.0 (2023-05-22)</h2> <p><strong>Security</strong></p> <ul> <li> <p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential forwarding of <code>Proxy-Authorization</code> headers to destination servers when following HTTPS redirects.</p> <p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests will construct a <code>Proxy-Authorization</code> header that is attached to the request to authenticate with the proxy.</p> <p>In cases where Requests receives a redirect response, it previously reattached the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being sent through the tunneled connection to the destination server. Users who rely on defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy credentials once the change has been fully deployed.</p> <p>Users who do not use a proxy or do not supply their proxy credentials through the user information portion of their proxy URL are not subject to this vulnerability.</p> <p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a> and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p> </li> </ul> <h2>v2.30.0</h2> <h2>2.30.0 (2023-05-03)</h2> <p><strong>Dependencies</strong></p> <ul> <li> <p>⚠️ Added support for urllib3 2.0. ⚠️</p> <p>This may contain minor breaking changes so we advise careful testing and reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a> prior to upgrading.</p> <p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3&lt;2</code>.</p> </li> </ul> <h2>v2.29.0</h2> <h2>2.29.0 (2023-04-26)</h2> <p><strong>Improvements</strong></p> <ul> <li>Requests now defers chunked requests to the urllib3 implementation to improve standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li> <li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p> <blockquote> <h2>2.31.0 (2023-05-22)</h2> <p><strong>Security</strong></p> <ul> <li> <p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential forwarding of <code>Proxy-Authorization</code> headers to destination servers when following HTTPS redirects.</p> <p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests will construct a <code>Proxy-Authorization</code> header that is attached to the request to authenticate with the proxy.</p> <p>In cases where Requests receives a redirect response, it previously reattached the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being sent through the tunneled connection to the destination server. Users who rely on defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy credentials once the change has been fully deployed.</p> <p>Users who do not use a proxy or do not supply their proxy credentials through the user information portion of their proxy URL are not subject to this vulnerability.</p> <p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a> and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p> </li> </ul> <h2>2.30.0 (2023-05-03)</h2> <p><strong>Dependencies</strong></p> <ul> <li> <p>⚠️ Added support for urllib3 2.0. ⚠️</p> <p>This may contain minor breaking changes so we advise careful testing and reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a> prior to upgrading.</p> <p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3&lt;2</code>.</p> </li> </ul> <h2>2.29.0 (2023-04-26)</h2> <p><strong>Improvements</strong></p> <ul> <li>Requests now defers chunked requests to the urllib3 implementation to improve standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li> <li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li> </ul> <h2>2.28.2 (2023-01-12)</h2> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li> <li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li> <li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li> <li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li> <li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li> <li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li> <li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li> <li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li> <li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li> <li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li> <li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.22.0...v2.31.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=requests&package-manager=pip&previous-version=2.22.0&new-version=2.31.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/22877
TITLE Llama fast tokenizer `train_new_from_iterator` returns `TypeError: 'NoneType' object is not subscriptable` COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info accelerate==0.18.0 aiohttp==3.8.4 aiosignal==1.3.1 anyio==3.6.2 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 asttokens==2.2.1 async-timeout==4.0.2 attrs==23.1.0 backcall==0.2.0 beautifulsoup4==4.12.2 bitsandbytes==0.38.1 bleach==6.0.0 certifi==2022.12.7 cffi==1.15.1 charset-normalizer==3.1.0 cmake==3.26.3 comm==0.1.3 datasets==2.11.0 debugpy==1.6.7 decorator==5.1.1 defusedxml==0.7.1 dill==0.3.6 evaluate==0.4.0 executing==1.2.0 fastjsonschema==2.16.3 filelock==3.12.0 fqdn==1.5.1 frozenlist==1.3.3 fsspec==2023.4.0 huggingface-hub==0.13.4 idna==3.4 importlib-metadata==6.5.0 importlib-resources==5.12.0 ipykernel==6.22.0 ipython==8.12.0 ipython-genutils==0.2.0 isoduration==20.11.0 jedi==0.18.2 Jinja2==3.1.2 jsonpointer==2.3 jsonschema==4.17.3 jupyter-events==0.6.3 jupyter_client==8.2.0 jupyter_core==5.3.0 jupyter_server==2.5.0 jupyter_server_terminals==0.4.4 jupyterlab-pygments==0.2.2 lit==16.0.1 MarkupSafe==2.1.2 matplotlib-inline==0.1.6 mistune==2.0.5 mpmath==1.3.0 multidict==6.0.4 multiprocess==0.70.14 nbclassic==0.5.5 nbclient==0.7.3 nbconvert==7.3.1 nbformat==5.8.0 nest-asyncio==1.5.6 networkx==3.1 notebook==6.5.4 notebook_shim==0.2.2 numpy==1.24.2 nvidia-cublas-cu11==11.10.3.66 nvidia-cuda-cupti-cu11==11.7.101 nvidia-cuda-nvrtc-cu11==11.7.99 nvidia-cuda-runtime-cu11==11.7.99 nvidia-cudnn-cu11==8.5.0.96 nvidia-cufft-cu11==10.9.0.58 nvidia-curand-cu11==10.2.10.91 nvidia-cusolver-cu11==11.4.0.1 nvidia-cusparse-cu11==11.7.4.91 nvidia-nccl-cu11==2.14.3 nvidia-nvtx-cu11==11.7.91 packaging==23.1 pandas==2.0.0 pandocfilters==1.5.0 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 pkgutil_resolve_name==1.3.10 platformdirs==3.2.0 prometheus-client==0.16.0 prompt-toolkit==3.0.38 protobuf==3.20.0 psutil==5.9.5 ptyprocess==0.7.0 pure-eval==0.2.2 pyarrow==11.0.0 pycparser==2.21 Pygments==2.15.1 pyrsistent==0.19.3 python-dateutil==2.8.2 python-dotenv==1.0.0 python-json-logger==2.0.7 pytz==2023.3 PyYAML==6.0 pyzmq==25.0.2 regex==2023.3.23 requests==2.28.2 responses==0.18.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 Send2Trash==1.8.0 sentencepiece==0.1.98 six==1.16.0 sniffio==1.3.0 soupsieve==2.4.1 stack-data==0.6.2 sympy==1.11.1 terminado==0.17.1 tinycss2==1.2.1 tokenizers==0.13.3 torch==2.0.0 tornado==6.3 tqdm==4.65.0 traitlets==5.9.0 -e git+https://github.com/huggingface/transformers.git@474bf508dfe0d46fc38585a1bb793e5ba74fddfd#egg=transformers triton==2.0.0 typing_extensions==4.5.0 tzdata==2023.3 uri-template==1.2.0 urllib3==1.26.15 wcwidth==0.2.6 webcolors==1.13 webencodings==0.5.1 websocket-client==1.5.1 xxhash==3.2.0 yarl==1.8.2 zipp==3.15.0 ### Who can help? @ArthurZucker , @Narsil ### Information - [] The official example scripts - [X ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Convert llama weights to hf format ``` python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size tokenizer_only --output_dir /output/path ``` 2. Train new tokenizer from old. 
``` from transformers import AutoTokenizer old_tokenizer = AutoTokenizer.from_pretrained(/output/path) old_tokenizer.train_new_from_iterator(["I love huggingface!"], 50) ``` ### Expected behavior ## Behavior I ran into the error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[3], line 5 3 old_tokenizer = AutoTokenizer.from_pretrained(PATH_TO_LLAMA_DIR,) ----> 5 old_tokenizer.train_new_from_iterator(["I love huggingface!"], 50) File ~/transformers/src/transformers/tokenization_utils_fast.py:709, in PreTrainedTokenizerFast.train_new_from_iterator(self, text_iterator, vocab_size, length, new_special_tokens, special_tokens_map, **kwargs) [707](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=706) if tokenizer_json["model"]["type"] == "Unigram" and unk_token is not None: [708](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=707) kwargs["unk_token"] = unk_token --> [709](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=708) if tokenizer_json["pre_tokenizer"]["type"] == "ByteLevel": [710](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=709) kwargs["initial_alphabet"] = pre_tokenizers_fast.ByteLevel.alphabet() [712](file:///home/jovyan/transformers/src/transformers/tokenization_utils_fast.py?line=711) trainer_class = MODEL_TO_TRAINER_MAPPING[tokenizer_json["model"]["type"]] TypeError: 'NoneType' object is not subscriptable ``` ## Analysis Inspecting my `tokenizer.json` file ([tokenizer.zip](https://github.com/huggingface/transformers/files/11279412/tokenizer.zip)), I realised my `"pre_tokenizer": null,` which led to the error. I'm not sure if it helps, but I had issue converting the llama weights to hf format (step 1) due to the protobuf version bug described [here](https://github.com/huggingface/transformers/issues/21128). I fixed it by downgrading my protobuf to version 3.20.
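For what it's worth, the `"pre_tokenizer": null` analysis can be checked directly on the backend tokenizer, and one possible (untested) workaround is to attach a pre-tokenizer before retraining. `Metaspace` below is only a guess at a sentencepiece-style default, not necessarily the tokenizer's original behavior:

```python
import json

from tokenizers import pre_tokenizers
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("/output/path")

# Confirm the root cause: the serialized tokenizer has no pre_tokenizer section.
print(json.loads(old_tokenizer.backend_tokenizer.to_str())["pre_tokenizer"])  # -> None

# Untested workaround sketch: attach a Metaspace pre-tokenizer, then retrain.
old_tokenizer.backend_tokenizer.pre_tokenizer = pre_tokenizers.Metaspace()
new_tokenizer = old_tokenizer.train_new_from_iterator(["I love huggingface!"], 50)
```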
[ 22 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization" ]
https://api.github.com/repos/huggingface/transformers/issues/24392
TITLE Allow `TextClassificationPipeline` to handle input longer than `model_max_length` tokens COMMENTS 2 REACTIONS +1: 2 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request We should add "chunking"/"sliding window" functionality to `TextClassificationPipeline`, allowing it to process documents longer than the `model_max_length` of its `.model`. Specifically, this would run an instance of the model on each of several "sliding window" views of each input sequence, then take the mean, similar to (but somewhat simpler than) how [`TokenClassificationPipeline`](https://github.com/huggingface/transformers/blob/ad78d9597b224443e9fe65a94acc8c0bc48cd039/src/transformers/pipelines/token_classification.py#L96) does so in part by subclassing from `ChunkPipeline`. ### Motivation It would be nice to easily do, e.g., sentiment analysis on documents longer than the `model_max_length` of the given model/tokenizer. I have in the past tried to do this in a time-sensitive context and was unable to do so. ### Your contribution I have already opened a draft PR: #24312. I would be happy to finish the missing parts (e.g. documentation) if someone on the Huggingface team (I believe @Narsil is the appropriate person to tag) can confirm that they would accept this feature as I plan to implement it.
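For illustration only (this is not the draft PR's code), the kind of sliding-window averaging being proposed can already be sketched with existing tokenizer options; the model name and window sizes below are arbitrary choices:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def classify_long_text(text, max_length=512, stride=128):
    # Split the text into overlapping windows instead of truncating it.
    enc = tokenizer(
        text,
        truncation=True,
        max_length=max_length,
        stride=stride,
        return_overflowing_tokens=True,
        padding=True,
        return_tensors="pt",
    )
    enc.pop("overflow_to_sample_mapping")  # not a model input
    with torch.no_grad():
        logits = model(**enc).logits       # (num_windows, num_labels)
    probs = logits.softmax(-1).mean(dim=0) # average over windows
    return {model.config.id2label[i]: p.item() for i, p in enumerate(probs)}
```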
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/22003
TITLE Add X-Decoder Model COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description X-Decoder is a generalized decoding pipeline that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. The model exhibits strong transferability to a wide range of downstream tasks in both zero-shot and fine-tuning settings, achieving state-of-the-art open-vocabulary segmentation and referring segmentation on 10 settings of 7 datasets and should be a valuable addition to transformers library ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/pdf/2212.11270.pdf Code: https://github.com/microsoft/X-Decoder Weights: https://huggingface.co/spaces/xdecoder/Demo/blob/main/xdecoder_focalt_last.pt Author: @eltociear Cc: @NielsRogge @alaradirik
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/23373
TITLE Update error message when Accelerate isn't installed COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR provides a bit more verbose error when `accelerate` isn't found on an install of `transformers`, as the `Trainer` (on PyTorch) requires Accelerate to be installed. The error message was changed from: ```python ImportError: Using the Trainer with PyTorch requires accelerate: Run pip install --upgrade accelerate ``` To be: ```python Using the `Trainer` with `PyTorch` requires `accelerate>=0.19.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U` ``` Fixes # (issue) - https://github.com/huggingface/transformers/issues/23323 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik (@sgugger when you are back)
[ 24 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ "trainer" ]
https://api.github.com/repos/huggingface/transformers/issues/22371
TITLE Conv1D doesn't output token-wise results consistently. COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Hi, I recently observed from huggingface's GPT2 that (1) the output (logits y1, ..., yN) from using a sequence with N tokens (say x1, ..., xN) (2) the output (logits z1, ..., zM) from using the earlier part of the above sequence (say x1, ..., xM) are not perfectly matched (y1!=z1,..., yM!=zM) during inference (so when causal mask is applied). I tried to figure out why this happened and realized that this is related to how `Conv1D`'s `forward` module is implemented: https://github.com/huggingface/transformers/blob/main/src/transformers/pytorch_utils.py#L100-L104 Thing is, we internally use `addmm` (say b + [x1, ..., xN]*W), which doesn't give you consistent row-wise outputs (say b + [x1, ..., xM]*W) although they should be the same theoretically. I generated an example and proposed a way to resolve the issue below: ```python import torch torch.manual_seed(0) torch.cuda.manual_seed(0) input_dim = 786 feature_dim = 2304 x1 = torch.randn((1, 38, input_dim), device='cuda') # (B, N, Fi) where N is the number of tokens in a sequence. x2 = x1[:, :10] # (B, M, Fi) where M=10 is to gather the early M tokens from the sequence. b = torch.randn((feature_dim,), device='cuda') # biases w = torch.randn((input_dim, feature_dim), device='cuda') # weights def addmm(x, b, w): x = x.view(-1, x.size(-1)) return torch.addmm(b, x, w) def addbmm(x, b, w): # (B, N, Fi), (Fi, Fh), (Fh) batch_size, seq_len = x.size(0), x.size(1) # B, N x = x.view(batch_size * seq_len, 1, x.size(-1)) # (B * N, 1, Fi) # (1, Fi, Fh).expand ( (B * N, Fi, Fh) ) --> (B * N, Fi, Fh) w = w.unsqueeze(0).expand((batch_size * seq_len,) + w.size()) return torch.matmul(x, w).add(b).view(batch_size * seq_len, -1) # (B * N, -1) print("result (addmm):\n", addmm(x1, b, w)[:10] == addmm(x2, b, w)) print("result (addbmm):\n", addbmm(x1, b, w)[:10] == addbmm(x2, b, w)) ``` The 1st function `addmm` is the one from huggingface's `Conv1D`, and the 2nd function `addbmm` is what I implemented to avoid numerical error. For the printend outputs, we ideally have to get `True` values always, but this is not the case of `addmm`. ```bash result (addmm): tensor([[False, False, False, ..., False, True, True], [ True, True, False, ..., False, False, True], [False, False, False, ..., False, False, False], ..., [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], [False, False, True, ..., False, False, False]], device='cuda:0') result (addbmm): tensor([[True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True], ..., [True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True]], device='cuda:0') ``` Intuitively, I enforced batched matmul computation by explicitly creating a batch dimension for tensors, which leads to explicit row-wise computations and ends up with ideal results. Thus, I think `forward()` part of `Conv1D` (https://github.com/huggingface/transformers/blob/main/src/transformers/pytorch_utils.py#L100-L104) should be updated as ```python def forward(self, x): size_out = x.size()[:-1] + (self.nf,) x = x.view(x.size()[:-1].numel(), 1, x.size(-1)) weight = self.weight.unsqueeze(0).expand((x.size()[:-1].numel(),) + w.size()) x = torch.matmul(x, weight).add(self.bias) x = x.view(size_out) return x ``` ### Who can help? 
@ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I provided an example above. ### Expected behavior After fixing the bug, the earlier partial logit outputs shouldn't be affected by the future tokens.
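For completeness, a self-contained version of the proposed `forward` above; note that `w.size()` in that snippet should presumably be `self.weight.size()` once the code lives inside `Conv1D`. This is a sketch of the proposal, not a merged fix:

```python
import torch

def forward(self, x):
    # Batched-matmul variant of Conv1D.forward, forcing row-wise computation.
    size_out = x.size()[:-1] + (self.nf,)
    rows = x.size()[:-1].numel()                                             # B * N
    x = x.view(rows, 1, x.size(-1))                                          # (B*N, 1, Fi)
    weight = self.weight.unsqueeze(0).expand((rows,) + self.weight.size())   # (B*N, Fi, nf)
    x = torch.matmul(x, weight).add(self.bias)                               # (B*N, 1, nf)
    return x.view(size_out)
```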
[ 13, 5 ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix", "Core: Modeling" ]
https://api.github.com/repos/huggingface/transformers/issues/22869
TITLE Fixup multigpu local_rank COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? The `local_rank` wasn't being properly set when using the `PartialState`, causing failures on the nightlies. This PR fixes it. Fixes # (issue) Failing nightly tests ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
[ 16, 27 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "Distributed Training / Models", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/23773
TITLE Implement DINO V2 COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description Code and model are available here: https://github.com/facebookresearch/dinov2 Full paper here: https://arxiv.org/abs/2304.07193 The implementation seems fairly simple. Most layers are already implemented within the transformers library (it's just a ViT). There are some changes compared to DINO (which is already implemented), such as SwiGLU and LayerScale. According to #20403, SwiGLU is already implemented, though the original code uses xformers's SwiGLU. DINO V2 also has a different license, as listed here: https://github.com/facebookresearch/dinov2/blob/main/LICENSE It is NonCommercial. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_ If there's no issue with the license, I can make a PR for the model.
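For anyone scoping the port, the two new blocks mentioned above are small. Rough sketches follow (these are not the official DINOv2 modules; dimensions and init values are placeholders):

```python
import torch
import torch.nn as nn

class SwiGLUFFN(nn.Module):
    """SwiGLU feed-forward: silu(x W1) * (x W3), projected back to the model dim."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim)
        self.w3 = nn.Linear(dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.w2(nn.functional.silu(self.w1(x)) * self.w3(x))

class LayerScale(nn.Module):
    """Learnable per-channel scaling applied to a residual branch."""
    def __init__(self, dim, init_value=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(init_value * torch.ones(dim))

    def forward(self, x):
        return x * self.gamma
```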
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24316
TITLE [Tokenizer] `skip_special_tokens` not working as expected COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # Reporting a failing API design This is mostly to help me record some of the biggest issues with the current API for adding tokens. This is linked to #23909. Here is a simple snippet: ```python >>> from transformers import AutoTokenizer, AddedToken >>> tokenizer = AutoTokenizer.from_pretrained("t5-base", use_fast = False) >>> new_toks = [ AddedToken("[ABC]", normalized=False), AddedToken("[DEF]", normalized=False), AddedToken("GHI IHG", normalized=False), ] >>> tokenizer.add_tokens(new_toks) >>> tokenizer.add_tokens([AddedToken("[SAMPLE]", normalized=True)], special_tokens = True) >>> print(tokenizer.added_tokens_encoder) >>> print( tokenizer.all_special_ids) ``` This will show that the newly added token (`[SAMPLE]`) is not part of the `all_special_ids`. However, `all_special_ids` is used when decoding, to check whether the token should be skipped or not: ```python for token in filtered_tokens: if skip_special_tokens and token in self.all_special_ids: continue if token in self.added_tokens_encoder: if current_sub_text: sub_texts.append(self.convert_tokens_to_string(current_sub_text)) current_sub_text = [] sub_texts.append(token) else: current_sub_text.append(token) ``` Thus ```python >>> encoded = tokenizer.encode("[ABC] [DEF][SAMPLE]", add_special_tokens=False) >>> tokenizer.decode(encoded, skip_special_tokens = True) "[ABC] [DEF][SAMPLE]" ``` However, the token is in `added_tokens_encoder` but not in `additional_special_tokens`. Now imagine you want `spaces_between_special_tokens`? This will add spaces between all added tokens, and thus checks if a token is part of `tokenizer.added_tokens_encoder`. ```python >>> encoded = tokenizer.encode("[ABC] [DEF][SAMPLE]", add_special_tokens=False) >>> tokenizer.decode(encoded, spaces_between_special_tokens = True) "[ABC] [DEF] [SAMPLE]" >>> tokenizer.decode(encoded, spaces_between_special_tokens = False) "[ABC][DEF][SAMPLE]" ```
[ 22 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization" ]
https://api.github.com/repos/huggingface/transformers/issues/22290
TITLE Native support of ChatGLM-6b COMMENTS 4 REACTIONS +1: 8 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Support https://huggingface.co/THUDM/chatglm-6b (and its int4 variants) in the Transformers library instead of relying on remote code execution. ### Motivation This model performs really well (despite being a small model compared to large ones) and got a LOT of attention recently. It might be the SD moment for LLM IMO as it runs perfectly on consumer GPUs. It would be great if Transformers can have native support for this model, instead of relying on remote code execution. A native integration will also make it much easier to use the model on Inference API / Endpoints. ### Your contribution cc @sgugger @osanseviero
[ 20, 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model", "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/24727
TITLE Add "save_best_only" parameter in "transformers.PushToHubCallback" class COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request When utilizing Keras callbacks, we have the ability to specify when the model should be saved during training. The **transformers.PushToHubCallback()** class already incorporates similar functionality through the use of the **"save_strategy"** parameter. This parameter accepts the following values: - "no": Saving is performed at the conclusion of training. - "epoch": Saving is performed at the end of each epoch. - "steps": Saving is performed every "save_steps" interval. However, these options do not take into account accuracy (or any other specified metric) improvement. In contrast, the Keras callback provides the **"save_best_only"** parameter, which exclusively saves the model when there is an enhancement in accuracy or the specified metric. The code snippet below demonstrates its usage: ``` #Define the callback callbacks = [ keras.callbacks.ModelCheckpoint( filepath="directory_name/model.keras", monitor="val_loss", save_best_only=True, ) ] #Start the training history = model.fit( train_dataset, epochs=5, validation_data=validation_dataset, monitor="val_loss", callbacks=callbacks) ``` The model mentioned above will undergo training for a total of 5 epochs. However, the model will only be saved when there is an improvement in the "validation loss" metric. **transformers.PushToHubCallback()** class must incorporate this feature as well. ### Motivation This feature is indeed quite valuable, and it is readily accessible through [Keras callbacks](https://keras.io/api/callbacks/model_checkpoint/#:~:text=save_best_only%3A%20if%20save_best_only%3DTrue%20%2C,by%20each%20new%20better%20model.). By utilizing this feature, significant processing power and bandwidth can be saved, particularly when dealing with large transformers models. It ensures that only the best-performing models, based on the specified metric (such as validation loss), are saved, resulting in more efficient storage and reduced computational resources. ### Your contribution This [source code](https://github.com/keras-team/keras/blob/v2.12.0/keras/callbacks.py) can be helpful.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/22829
TITLE Add CLIP-ViP COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description [CLIP-ViP](https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP) is a video-language model which is based on a pre-trained image-text model [CLIP](https://openai.com/blog/clip/) then further pre-trained (post-pretraining) on a large-scale video-text dataset [HD-VILA-100M](https://github.com/microsoft/XPretrain/tree/main/hd-vila-100m). This work is accepted by ICLR 2023. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation [The official implementation](https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP) This repo has model implementation and pre-trained weights. @hellwayxue
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/22378
TITLE [performance] ensure `causal_mask` is created directly on device COMMENTS 9 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 5 rocket: 0 eyes: 0 BODY # What does this PR do? @tjruwase and @tohtana discovered that causal_mask is currently being created on CPU then moved to GPU during the forward pass of OPT (and we think other models). This appears to be causing a significant performance degradation on multi-gpu environments due to parallel host to device copies going on. It's not 100% clear to us why this is so bad but here is what we observe before and after this patch: Before this patch w. OPT-125m on x8 A100s: <img width="649" alt="image" src="https://user-images.githubusercontent.com/645595/227668447-bf6840dd-bbc4-4520-8a9f-33f046eeb4c2.png"> After the patch: <img width="628" alt="image" src="https://user-images.githubusercontent.com/645595/227668475-6ed2f1ca-d18a-4776-862d-4be499f62f39.png"> These numbers were gathered from a modified version of https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py but turning on `wall_clock_breakdown: true` in our deepspeed config. One major complication we see in accepting this PR is that the two functions being modified are copied across lots of different models and the `make fix-copies` script doesn't seem to address all of them correctly across both `_make_causal_mask` and `_prepare_decoder_attention_mask` ## Who can review? Tagging @sgugger and @stas00 to help triage to the right people
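For readers who don't open the diff, the gist of the change can be sketched as passing the target device into the mask helper so the tensor is allocated there from the start (simplified, not the exact patch):

```python
import torch

def _make_causal_mask(input_ids_shape, dtype, device, past_key_values_length=0):
    # Allocate directly on `device` instead of building on CPU and calling .to(device).
    bsz, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)
    if past_key_values_length > 0:
        mask = torch.cat(
            [torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask],
            dim=-1,
        )
    return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
```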
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Performance" ]
https://api.github.com/repos/huggingface/transformers/issues/23923
TITLE Adding support for 3D deep learning models. COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Hi. I am planning to add a new pipeline and a model for 3D deep learning tasks that can work on point clouds for classification and detection, as there is no support for 3D data right now. I just wanted to confirm whether the process will be similar to the existing guides for adding a new pipeline and model to Hugging Face Transformers, or whether there are more complexities I have not thought about. And is it going to be too much work to add GPU support and batching? ### Motivation I have been working with 3D deep learning and wanted to implement the whole process from scratch. So, why not contribute to Hugging Face so other people can use and build upon it? ### Your contribution Submitting a PR
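At a high level, a custom pipeline comes down to implementing the standard hooks; the sketch below is purely illustrative (no such pipeline or model exists in the library today, and all names are placeholders):

```python
import torch
from transformers import Pipeline

class PointCloudClassificationPipeline(Pipeline):
    """Illustrative sketch of the standard pipeline hooks for a hypothetical 3D model."""

    def _sanitize_parameters(self, top_k=None, **kwargs):
        postprocess_kwargs = {}
        if top_k is not None:
            postprocess_kwargs["top_k"] = top_k
        return {}, {}, postprocess_kwargs

    def preprocess(self, inputs):
        # `inputs` is assumed to be an (N, 3) array of xyz points; a real pipeline
        # would normalize/sample it via a feature extractor.
        return {"points": torch.as_tensor(inputs).unsqueeze(0)}

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs, top_k=1):
        probs = model_outputs.logits.softmax(-1)[0]
        scores, ids = probs.topk(top_k)
        return [
            {"label": self.model.config.id2label[i.item()], "score": s.item()}
            for s, i in zip(scores, ids)
        ]
```

Device placement is handled by the base `Pipeline` class once the hooks return tensors; batching would still need a collate strategy appropriate for variable-size point clouds.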
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/23880
TITLE Whisper with Elastic Weight Consolidation COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request After language-specific fine-tuning of Whisper, its ASR performance on previously covered languages deteriorates, a problem known as catastrophic forgetting. Therefore, something such as EWC needs to be used to overcome this problem. Here is the EWC paper: _https://arxiv.org/pdf/1612.00796.pdf_ ### Motivation When I fine-tuned Whisper large-v2 with 10 hours of Af language data, its WER on languages like Be and Is rose to close to 90%. By comparison, the WER for these languages under the pre-fine-tuning model is around 40%. So I hope to use EWC to overcome or mitigate this problem. ### Your contribution I believe in the professional ability of the Hugging Face team, and I can provide data support for it.
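To make the request concrete, the core of the EWC method in the linked paper is a quadratic penalty that anchors parameters to their pre-fine-tuning values, weighted by a diagonal Fisher estimate. A minimal sketch follows; it is not tied to any existing Trainer option:

```python
import torch

def ewc_penalty(model, ref_params, fisher_diag, lam=1.0):
    """lam/2 * sum_i F_i * (theta_i - theta_star_i)^2, summed over matched parameters."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (param - ref_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# During fine-tuning on the new language:
#   loss = asr_loss + ewc_penalty(model, ref_params, fisher_diag, lam=...)
# where ref_params are frozen copies of the pre-fine-tuning weights and fisher_diag
# is estimated from squared gradients on data from the languages to be preserved.
```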
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/25098
TITLE Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/decision_transformer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li> <li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li> <li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li> <li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2022.12.7&new-version=2023.7.22)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/24840
TITLE Again: RuntimeError: unscale_() has already been called on this optimizer since the last update() COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.30.2 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.12 - Huggingface_hub version: 0.16.2 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I've seen the PR where this was supposed to be fixed. Now I think from my experiments the issue still happens when gradient_accumulation_steps is larger than dataset.num_rows divided by per_device_train_batch_size These are obviously not very good input data and it's kinda obvious that it should blow - but this could happen to people if the dataset is too small and gradient_accumulation_steps set arbitrary - but no relevant info is given just a RuntimeError. So it's kinda bug and kinda user error. On my end the issue can be fixed if I lower the gradient_accumulation_steps so it satisfies the above requirement. I had not looked at the transformers code where this gets to play. This is literally based on my own hunch - if I would forget to safeguard the data, I would make an error there. The debug log File "\env\lib\site-packages\transformers\trainer.py", line 1645, in train return inner_training_loop( File "\env\lib\site-packages\transformers\trainer.py", line 1987, in _inner_training_loop self.accelerator.clip_grad_norm_( File "\env\lib\site-packages\accelerate\accelerator.py", line 1893, in clip_grad_norm_ self.unscale_gradients() File "\env\lib\site-packages\accelerate\accelerator.py", line 1856, in unscale_gradients self.scaler.unscale_(opt) File "env\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ### Expected behavior safe guard this if it is indeed an issue or give error that tells you why this happens (too high gradient_accumulation_steps for the amount of data) If this is wrong place to bring this, let me know.
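To illustrate the kind of safeguard being suggested (purely hypothetical, not actual Trainer internals):

```python
def check_accumulation_fits(num_examples, per_device_train_batch_size, world_size,
                            gradient_accumulation_steps):
    """Hypothetical pre-flight check: fail early with a readable message
    instead of the opaque unscale_() RuntimeError."""
    batches_per_epoch = num_examples // (per_device_train_batch_size * world_size)
    if gradient_accumulation_steps > max(batches_per_epoch, 1):
        raise ValueError(
            f"gradient_accumulation_steps ({gradient_accumulation_steps}) exceeds the "
            f"~{batches_per_epoch} batches available per epoch; reduce it or provide more data."
        )
```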
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
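As an illustration of the safeguard suggested in the report above, here is a minimal sketch that fails fast when `gradient_accumulation_steps` exceeds the number of batches one epoch can provide. The function name and error text are my own assumptions, not Trainer internals.

```python
import math

def check_gradient_accumulation(num_rows: int, per_device_train_batch_size: int,
                                gradient_accumulation_steps: int) -> None:
    """Raise a descriptive error instead of letting unscale_() fail later."""
    batches_per_epoch = math.ceil(num_rows / per_device_train_batch_size)
    if gradient_accumulation_steps > batches_per_epoch:
        raise ValueError(
            f"gradient_accumulation_steps={gradient_accumulation_steps} exceeds the "
            f"{batches_per_epoch} batches one epoch yields ({num_rows} rows at batch size "
            f"{per_device_train_batch_size}), so no optimizer update would ever run. "
            "Lower gradient_accumulation_steps or provide more data."
        )

try:
    # 100 rows at batch size 4 -> 25 batches; asking for 32 accumulation steps fails fast.
    check_gradient_accumulation(100, 4, 32)
except ValueError as err:
    print(err)
```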
https://api.github.com/repos/huggingface/transformers/issues/22413
TITLE Add interpolation of position encodings to BLIP-2 COMMENTS 4 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request The ViT implementation in Hugging Face Transformers has a feature that enables fine-tuning with a different image resolution (https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTModel.forward.interpolate_pos_encoding), while the newly implemented BLIP-2 model does not. I would like to add this, following the ViT implementation. ### Motivation I was playing around with the model to see whether a different (mainly higher) input image resolution helps downstream tasks. (Curious to get feedback on whether this feature is needed or not, for the sake of keeping the code simple.) ### Your contribution It's mostly copying & pasting `interpolate_pos_encoding` from the ViT implementation, but I have working code ready for a PR to be reviewed (and bugs addressed).
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
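For context on the requested feature, below is a rough, self-contained sketch of the kind of bicubic position-embedding interpolation that ViT-style models use. It is an illustrative assumption about what the BLIP-2 version would look like, not the actual implementation.

```python
import math
import torch
import torch.nn.functional as F

def interpolate_pos_encoding(pos_embed: torch.Tensor, height: int, width: int,
                             patch_size: int) -> torch.Tensor:
    """pos_embed has shape (1, 1 + N, dim): a CLS position followed by an N = g*g patch grid."""
    cls_pos, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    old_grid = int(math.sqrt(patch_pos.shape[1]))
    new_h, new_w = height // patch_size, width // patch_size
    # Reshape to a 2D grid, resize with bicubic interpolation, flatten back.
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_h, new_w), mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_h * new_w, dim)
    return torch.cat([cls_pos, patch_pos], dim=1)

# A 224px checkpoint (14x14 grid of 16px patches) applied to 336px inputs:
pos = torch.randn(1, 1 + 14 * 14, 768)
print(interpolate_pos_encoding(pos, height=336, width=336, patch_size=16).shape)  # (1, 442, 768)
```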
https://api.github.com/repos/huggingface/transformers/issues/24179
TITLE Loading a tokenizer from the Tokenizers library doesn't transfer over padding/truncation behavior correctly COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Not especially relevant, but included for completeness: - `transformers` version: 4.29.2 - Platform: macOS-13.2-x86_64-i386-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (n.b.: originally posted a similar query in the [transformers forum](https://discuss.huggingface.co/t/padding-not-working-when-loading-a-tokenizer-trained-via-the-tokenizers-library-into-transformers/42326/1) but got no answer there.) I trained a simple WhitespaceSplit/WordLevel tokenizer using the `tokenizers` library. I added padding by calling `enable_padding(pad_token="<pad>")` on the Tokenizer instance. Then I saved it to a JSON file and then loaded it into transformers using [the instructions here](https://huggingface.co/docs/transformers/fast_tokenizers): ```py fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` When using the `tokenizers.Tokenizer` object directly, `encode` correctly adds the padding tokens. However, if I try padding when tokenizing using the `PreTrainedTokenizerFast` instance, I get the exception: ```py ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. ``` Sure enough, if I follow the instructions and add the pad token as a special token, it works. Alternatively, I can pass the argument `pad_token="<pad>"` to the `PreTrainedTokenizerFast` constructor call, to the same effect. To reproduce the problem, you can use the code below. Most of it is from the [tokenizers Quicktour](https://huggingface.co/docs/tokenizers/quicktour), so you'll need to download the data files as per the instructions there (or modify `files` if using your own files). The rest is from the official transformers docs on [how to load a tokenizer from `tokenizers` into `transformers`](https://huggingface.co/docs/transformers/fast_tokenizers): ```py from tokenizers import BpeTrainer, Tokenizer from tokenizers.models import BPE from tokenizers.pre_tokenizers import Whitespace from transformers import PreTrainedTokenizerFast files = [f"data/wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]] sentences = ["Hello, y'all!", "How are you 😁 ?"] tokenizer = Tokenizer(BPE(unk_token="[UNK]")) trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) tokenizer.pre_tokenizer = Whitespace() tokenizer.train(files, trainer) # Enable padding tokenizer.enable_padding(pad_id=3, pad_token="[PAD]") # Now use this tokenizer to tokenize a couple of sentences. 
output = tokenizer.encode_batch(sentences) # The output is padded, as it should be: print(output[0].tokens) # ['Hello', ',', 'y', "'", 'all', '!'] print(output[1].tokens) # ['How', 'are', 'you', '[UNK]', '?', '[PAD]'] # But now let's say we load the tokenizer into transformers- let's try loading it directly from the tokenizer object: fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) # Tokenize two strings of different token length with padding fast_output = fast_tokenizer(sentences, padding=True) ``` This gives us the error: ``` Using pad_token, but it is not set yet. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2548, in __call__ encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2634, in _call_one return self.batch_encode_plus( File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2816, in batch_encode_plus padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2453, in _get_padding_truncation_strategies raise ValueError( ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. ``` We can resolve the issue by explicitly specifying the special tokens when initializing the `PreTrainedTokenizerFast`: ```py fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer, pad_token="[PAD]", unk_token="[UNK]") # Now padding works as expected fast_output = fast_tokenizer(sentences, padding=True) print(fast_output[0].tokens) # ['Hello', ',', 'y', "'", 'all', '!'] print(fast_output[1].tokens) # ['How', 'are', 'you', '[UNK]', '?', '[PAD]'] ``` The code above uses the `tokenizer_object` parameter to load the fast tokenizer as a `PreTrainedTokenizerFast` instance, but as you can confirm for yourselves, the same behavior occurs if you first save the tokenizer to file, then load it into `PreTrainedTokenizerFast` using the `tokenizer_file` parameter instead. **First, I wanted to check- am I doing something wrong/missing something? Or is this just how it works?** If the latter, as follows, an explanation of how I feel it should work and why. ### Expected behavior I understand that I can get the desired behavior by either: 1. Add the pad token as a special token i.e. `fast_tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. 2. Alternatively, I can pass the argument `pad_token='[PAD]'` to the `PreTrainedTokenizerFast` constructor call, to the same effect. But I want the tokenizer to work *out of the box identically as the `tokenizer.Tokenizer` instance does* (to the extent that is reasonably possible), including in terms of padding behavior I find it confusing and awkward that I have to enable padding for the `tokenizer.Tokenizer` instance, and then *again* for the `PreTrainedTokenizerFast` instance. Imagine if your system architecture/workflow has two entirely different processes for tokenizing a document vs. 
training a model on it using `transformers` (as I imagine is often the case for people). Then you would need to hardcode the pad token in both locations, and if for some reason you wanted to change it, also update it in both locations. On the other hand, if `PreTrainedTokenizerFast` really behaved exactly like the fast tokenizer it was created from, the training code could be entirely agnostic to how the tokenizer was created. All it would need was a path to the saved tokenizer config, and it could proceed without needing to know anything else. This is the behavior I think most people would naturally expect. It could make sense to keep the `pad_token` parameter in the `PreTrainedTokenizerFast` *as an optional override*, or for cases where the fast tokenizer didn't have a padding token set, but the default should be to copy over the padding behavior as-is. Put another way, the tokenizer object/config file should uniquely determine the tokenization behavior of a tokenizer, whether it is a `tokenizers.Tokenizer` instance or its equivalent `PreTrainedTokenizerFast` (to the extent it can; I understand some misalignment is probably inevitable, but this seems not to be one of those cases). **Bottom line:** If the padding information is already in the tokenizer (or in the saved tokenizer config file), you should not need to explicitly specify the padding token again when transferring the tokenizer. This introduces a lot of totally unnecessary friction and leads to brittle code. The tokenizer object/config should be self-contained (i.e. I should not need to hardcode the pad token in two places), and information already encapsulated in the tokenizer object or its saved config file should be preserved on transfer. EDIT: I later observed that the same behavior is true of truncation. See my followup comment for what I believe to be the responsible section of code.
[ 22 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization" ]
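A possible workaround for the report above, assuming the read-only `Tokenizer.padding` property exposed by recent `tokenizers` releases (if your version lacks it, this sketch does not apply): read the padding settings back from the tokenizer object and forward them, so the pad token stays defined in one place.

```python
from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

def to_fast_tokenizer(tok: Tokenizer) -> PreTrainedTokenizerFast:
    """Carry padding over from a tokenizers.Tokenizer instead of hardcoding it a second time."""
    kwargs = {}
    padding = tok.padding  # e.g. {"pad_token": "[PAD]", "pad_id": 3, ...}, or None if disabled
    if padding is not None:
        kwargs["pad_token"] = padding["pad_token"]
    return PreTrainedTokenizerFast(tokenizer_object=tok, **kwargs)

# fast_tokenizer = to_fast_tokenizer(tokenizer)  # `tokenizer` built as in the report above
```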
https://api.github.com/repos/huggingface/transformers/issues/24643
TITLE "RuntimeError: 'weight' must be 2-D" training with DeepSpeed COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @pacman100 @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The dataset being used is my own dataset that is just a few hundred strings in a CSV file produced by pandas. Running the following code ```Python from transformers import GPTJForCausalLM, AutoTokenizer, Trainer, TrainingArguments, DataCollatorForLanguageModeling import os from torch.utils.data import Dataset import pandas as pd import evaluate import numpy as np import sklearn import torch as nn from transformers.trainer_pt_utils import get_parameter_names model_name = "EleutherAI/gpt-j-6b" d_type = "auto" print("CUDA Available: "+ str(nn.cuda.is_available())) print("CUDA Version: " + str(nn.version.cuda)) print("GPUs Available: "+ str(nn.cuda.device_count())) def process_csv(filename, tknizer): data = pd.read_csv(filename) return tknizer(list(data["text"].values.flatten()), padding=True, truncation=True, return_tensors="pt") tokenizer = AutoTokenizer.from_pretrained(model_name, torch_dtype=d_type) collator = DataCollatorForLanguageModeling(tokenizer, mlm=False) tokenizer.pad_token = tokenizer.eos_token class MyDataset(Dataset): def __init__(self, tokenized_input): self.tokenized_input = tokenized_input def __getitem__(self, idx): return {key: val[idx] for key, val in self.tokenized_input.items()} def __len__(self): return len(self.tokenized_input.input_ids) metric = evaluate.load("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) train_data = MyDataset(process_csv("train_data.csv", tokenizer)) eval_data = MyDataset(process_csv("test_data.csv", tokenizer)) training_args = TrainingArguments( output_dir="test_trainer", deepspeed="deepSpeedCPU.json", ) model = GPTJForCausalLM.from_pretrained(model_name, torch_dtype=d_type).cuda() print("Total Memory: " + str(nn.cuda.get_device_properties(0).total_memory)) print("Reserved: " + str(nn.cuda.memory_reserved(0))) print("Allocated: " + str(nn.cuda.memory_allocated(0))) trainer = Trainer( model=model, args=training_args, train_dataset=train_data, eval_dataset=eval_data, data_collator=collator, compute_metrics=compute_metrics, ) trainer.train() ``` using the following config file ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { 
"device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` Causes an error at trainer.train() ``` Traceback (most recent call last): File "/home/augustus/ADAM/main2.py", line 82, in <module> trainer.train() File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train return inner_training_loop( File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2759, in training_step loss = self.compute_loss(model, inputs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2784, in compute_loss outputs = model(**inputs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 854, in forward transformer_outputs = self.transformer( File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 634, in forward inputs_embeds = self.wte(input_ids) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward return F.embedding( File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: 'weight' must be 2-D ``` ### Expected behavior I would expect training to begin or a more verbose error to help fix the issue (if possible to do so from my side)
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
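For reference, a "'weight' must be 2-D" error under ZeRO-3 usually means the embedding weights were already partitioned (or moved) outside DeepSpeed's control when the forward pass ran. A hedged sketch of the loading order that commonly avoids it, based on the script above: build `TrainingArguments` (with the DeepSpeed config) before `from_pretrained`, and do not call `.cuda()` yourself.

```python
from transformers import GPTJForCausalLM, Trainer, TrainingArguments

# 1) Create TrainingArguments first so the ZeRO-3 config is visible when the model loads.
training_args = TrainingArguments(output_dir="test_trainer", deepspeed="deepSpeedCPU.json")

# 2) Let Trainer/DeepSpeed handle device placement and partitioning; no manual .cuda().
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

# 3) Build the Trainer exactly as in the script above (datasets, collator, metrics), then train:
# trainer = Trainer(model=model, args=training_args, train_dataset=train_data, data_collator=collator)
# trainer.train()
```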
https://api.github.com/repos/huggingface/transformers/issues/24954
TITLE Bump aiohttp from 3.8.1 to 3.8.5 in /examples/research_projects/decision_transformer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.1 to 3.8.5. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/releases">aiohttp's releases</a>.</em></p> <blockquote> <h2>3.8.5</h2> <h2>Security bugfixes</h2> <ul> <li> <p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code> and :user:<code>Dreamsorcerer</code>.</p> <p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with comprehensive reproducer, workarounds and fixing details! For more information, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p> <p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>)</p> </li> </ul> <h2>Features</h2> <ul> <li> <p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>)</p> </li> </ul> <h2>Bugfixes</h2> <ul> <li> <p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/3355">#3355</a>)</p> </li> </ul> <hr /> <h2>3.8.4</h2> <h2>Bugfixes</h2> <ul> <li>Fixed incorrectly overwriting cookies with the same name and domain, but different path. (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/6638">#6638</a>)</li> <li>Fixed <code>ConnectionResetError</code> not being raised after client disconnection in SSL environments. (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7180">#7180</a>)</li> </ul> <hr /> <h2>3.8.3</h2> <p>.. attention::</p> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst">aiohttp's changelog</a>.</em></p> <blockquote> <h1>3.8.5 (2023-07-19)</h1> <h2>Security bugfixes</h2> <ul> <li> <p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code> and :user:<code>Dreamsorcerer</code>.</p> <p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with comprehensive reproducer, workarounds and fixing details! For more information, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p> <p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p> <p><code>[#7346](https://github.com/aio-libs/aiohttp/issues/7346) &lt;https://github.com/aio-libs/aiohttp/issues/7346&gt;</code>_</p> </li> </ul> <h2>Features</h2> <ul> <li> <p>Added information to C parser exceptions to show which character caused the error. 
-- by :user:<code>Dreamsorcerer</code></p> <p><code>[#7366](https://github.com/aio-libs/aiohttp/issues/7366) &lt;https://github.com/aio-libs/aiohttp/issues/7366&gt;</code>_</p> </li> </ul> <h2>Bugfixes</h2> <ul> <li> <p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p> <p><code>[#3355](https://github.com/aio-libs/aiohttp/issues/3355) &lt;https://github.com/aio-libs/aiohttp/issues/3355&gt;</code>_</p> </li> </ul> <hr /> <h1>3.8.4 (2023-02-12)</h1> <h2>Bugfixes</h2> <ul> <li>Fixed incorrectly overwriting cookies with the same name and domain, but different path. <code>[#6638](https://github.com/aio-libs/aiohttp/issues/6638) &lt;https://github.com/aio-libs/aiohttp/issues/6638&gt;</code>_</li> <li>Fixed <code>ConnectionResetError</code> not being raised after client disconnection in SSL environments. <code>[#7180](https://github.com/aio-libs/aiohttp/issues/7180) &lt;https://github.com/aio-libs/aiohttp/issues/7180&gt;</code>_</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/aio-libs/aiohttp/commit/9c13a52c21c23dfdb49ed89418d28a5b116d0681"><code>9c13a52</code></a> Bump aiohttp to v3.8.5 a security release</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/7c02129567bc4ec59be467b70fc937c82920948c"><code>7c02129</code></a>  Bump pypa/cibuildwheel to v2.14.1</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/135a45e9d655d56e4ebad78abe84f1cb7b5c62dc"><code>135a45e</code></a> Improve error messages from C parser (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7380">#7380</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/9337fb3f2ab2b5f38d7e98a194bde6f7e3d16c40"><code>9337fb3</code></a> Fix bump llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7367">#7367</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7377">#7377</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/f07e9b44b5cb909054a697c8dd447b30dbf8073e"><code>f07e9b4</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7373">#7373</a>/66e261a5 backport][3.8] Drop azure mention (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7374">#7374</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/01d9b70e5477cd746561b52225992d8a2ebde953"><code>01d9b70</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7370">#7370</a>/22c264ce backport][3.8] fix: Spelling error fixed (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7371">#7371</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/3577b1e3719d4648fa973dbdec927f78f9df34dd"><code>3577b1e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7359">#7359</a>/7911f1e9 backport][3.8]  Set up secretless publishing to PyPI (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7360">#7360</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/8d45f9c99511cd80140d6658bd9c11002c697f1c"><code>8d45f9c</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7333">#7333</a>/3a54d378 backport][3.8] Fix TLS transport is <code>None</code> error (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7357">#7357</a>)</li> <li><a 
href="https://github.com/aio-libs/aiohttp/commit/dd8e24e77351df9c0f029be49d3c6d7862706e79"><code>dd8e24e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7343">#7343</a>/18057581 backport][3.8] Mention encoding in <code>yarl.URL</code> (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7355">#7355</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/40874103ebfaa1007d47c25ecc4288af873a07cf"><code>4087410</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>/346fd202 backport][3.8]  Bump vendored llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7352">#7352</a>)</li> <li>Additional commits viewable in <a href="https://github.com/aio-libs/aiohttp/compare/v3.8.1...v3.8.5">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiohttp&package-manager=pip&previous-version=3.8.1&new-version=3.8.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/24044
TITLE Add keypoint-detection task COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 1 rocket: 0 eyes: 0 BODY ### Feature request Add support for keypoint detection. This includes a task, pipeline, dataset label and training pipeline. The task is to take an image and predict the x and y locations of a set of keypoints. Which keypoints are predicted should depend on the model trained for this task. The training pipeline for keypoint detection should allow to swap components. For example, one should be able to choose the backbone to be any suitable vision transformer model that is available on the huggingface hub. ### Motivation Keypoint detection is a use case that is prevalent in computer vision. The computer vision subset of the huggingface ecosystem would benefit from adding the popular keypoint detection task to the existing set of tasks. At the time of writing, existing repositories for keypoint detection often focus on a single particular model, e.g.: - yolov7: https://github.com/RizwanMunawar/yolov7-pose-estimation - yolov8: https://docs.ultralytics.com/tasks/pose/ - vitpose: https://github.com/ViTAE-Transformer/ViTPose The computer vision community could benefit greatly from a high quality community oriented open source hub for keypoint detection. ### Your contribution I am happy to be part of the discussion, but probably can do little in terms of PR's at this point in time.
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
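To make the request concrete, here is a minimal sketch of a keypoint-detection head on top of a swappable Hub backbone. The checkpoint name, head design, and 17-keypoint count are illustrative assumptions, not an existing transformers API.

```python
import torch
from torch import nn
from transformers import AutoModel

class SimpleKeypointDetector(nn.Module):
    """Regress normalized (x, y) coordinates for a fixed set of keypoints."""
    def __init__(self, backbone: str = "google/vit-base-patch16-224-in21k", num_keypoints: int = 17):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = AutoModel.from_pretrained(backbone)  # any ViT-like Hub model
        self.head = nn.Linear(self.backbone.config.hidden_size, num_keypoints * 2)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        cls_feature = self.backbone(pixel_values=pixel_values).last_hidden_state[:, 0]
        return self.head(cls_feature).reshape(-1, self.num_keypoints, 2).sigmoid()

model = SimpleKeypointDetector()
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 17, 2])
```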
https://api.github.com/repos/huggingface/transformers/issues/22294
TITLE Add perf_infer_gpu_one.mdx Italian translation COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY See issue [https://github.com/huggingface/transformers/issues/17459] Add Italian translation of perf_infer_gpu_one.mdx and update _toctree.yml. It's my first pull request, so I hope it's OK.
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
https://api.github.com/repos/huggingface/transformers/issues/22567
TITLE Unable to import VGG16 model from transformers COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description I have recently uploaded my trained VGG16 model to Hugging Face. After uploading, I was shown instructions for using my model, but although I followed them I got errors. [https://huggingface.co/Nvsai/DeviceClassification](url) >>> from transformers import VGG16 Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'VGG16' from 'transformers' (/mnt/mydrive/ubantu/programming/openvino/lib/python3.9/site-packages/transformers/__init__.py) >>> >>> model = VGG16.from_pretrained("Nvsai/DeviceClassification") ![Screenshot from 2023-04-04 20-45-24](https://user-images.githubusercontent.com/87435205/229841900-e12cee0f-69a1-4dd5-9332-2f65f177e8cf.png) ![Screenshot from 2023-04-04 20-45-56](https://user-images.githubusercontent.com/87435205/229841929-812f7eb6-58e1-4919-aff6-35200aee426c.png) ### Open source status - [ ] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
[ 20 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/24705
TITLE elif self.fsdp is not None and self.args.fsdp_config["xla"]: COMMENTS 8 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.28 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In Trainer, if self.args.fsdp_config["xla"] is true, the trainer will wrap the model layers. Does this mean that if I want to use FSDP to shard the model, I must install torch-xla>2.0? ### Expected behavior I want to know whether I must install torch-xla for FSDP training.
[ 25 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "solved" ]
https://api.github.com/repos/huggingface/transformers/issues/23664
TITLE Update all no_trainer with skip_first_batches COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR updates all `no_trainer` examples to use `skip_first_batches` properly from the `Accelerator`/Accelerate when resuming from a checkpoint Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
[ 0 ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Examples" ]
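For readers unfamiliar with the API being wired into the examples, a small sketch of how `Accelerator.skip_first_batches` is typically used when resuming mid-epoch; the toy dataset and step count are assumptions for illustration.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
dataset = TensorDataset(torch.arange(16, dtype=torch.float32).unsqueeze(1))
dataloader = accelerator.prepare(DataLoader(dataset, batch_size=4))

resume_step = 2  # e.g. recovered from the checkpoint being resumed
skipped = accelerator.skip_first_batches(dataloader, resume_step)
for batch in skipped:  # only the remaining batches of the interrupted epoch
    print(batch)
```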
https://api.github.com/repos/huggingface/transformers/issues/23459
TITLE ”never_split“ not working on BertTokenizer COMMENTS 30 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers 4.28.1 python 3.8.13 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - I load BertTokenizer using my own vocab.txt and add _'[outline]'_ to _never_split_; the token is included in my vocab.txt. However, _'[outline]'_ still gets split. The following is my code: `tokenizer = BertTokenizer.from_pretrained(pretrained_path,never_split=['[outline]']) input = "。[outline]" print(tokenizer.tokenize(input)) # ['。', '[', 'out', '##line', ']'] ` - I also do: `print(tokenizer.basic_tokenizer.tokenize(input)) #['。', '[', 'outline', ']']` ### Expected behavior When I do `tokenizer.tokenize("。[outline]")`, I expect the result `['。', '[outline]']`; the tokens in never_split should not be split.
[ 22 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization" ]
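One workaround that usually sidesteps this (a sketch, using a public checkpoint in place of the reporter's local vocab): register the marker as an added token, which is matched before the basic/WordPiece steps run and is therefore kept atomic. Whether `never_split` itself should cover this case is a separate question.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
# Added tokens are split out before the basic tokenizer runs, so "[outline]" stays whole.
tokenizer.add_tokens(["[outline]"], special_tokens=True)

print(tokenizer.tokenize("。[outline]"))  # expected: ['。', '[outline]']
```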
https://api.github.com/repos/huggingface/transformers/issues/24028
TITLE 🚨🚨🚨 Replace DataLoader logic for Accelerate in Trainer, remove unneeded tests 🚨🚨🚨 COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR: - Guts the internals for the `DataLoader` in all basic distributed fashions (replacing `pl.Loader` for TPU coming in a follow-up PR) to replace it with `accelerator.prepare` - Removes **two** tests that were deemed unnecessary - Test 1 removed: `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_sampler_seed`, deemed to no longer be necessary to reset the seed, as Accelerate's dataloader setup doesn't need any extra help when iterating/loading back in the seed, regardless of the torch version - Test 2 removed: `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_training_finite_iterable_dataset`, as with Accelerate's new sampler for `IterableDataset` we'll actually catch if it's `None` and raise an error, a new test will be made + clear error message on the `Accelerate` side, with a test added to `Trainer` afterwards. - Modifies two tests to use the proper attribute: Accelerator's `DataLoaders` all have `total_batch_size` rather than `batch_size` - `test_train_and_eval_dataloaders` and `test_data_is_not_parallelized_when_model_is_parallel` Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @pacman100
[ 6, 26, 16, 24 ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0 ]
[ "External", "Tests", "Distributed Training / Models", "trainer" ]
https://api.github.com/repos/huggingface/transformers/issues/22833
TITLE Update accelerate version + warning check fix COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR bumps the accelerate version, and flips the logic for the warning to be accurate on the distributed mode check Fixes # (issue) Solves https://github.com/huggingface/transformers/issues/22816 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @pacman100
[ 21, 16 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies", "Distributed Training / Models" ]
https://api.github.com/repos/huggingface/transformers/issues/24214
TITLE Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/bert-loses-patience COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2> <h2>100k</h2> <p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p> <p>We accept PRs to add projects to the list!</p> <ul> <li>Top 100 by <a href="https://github.com/LysandreJik"><code>@​LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li> <li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@​ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li> <li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@​jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li> </ul> <h2>4-bit quantization and QLoRA</h2> <p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p> <ul> <li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@​TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li> </ul> <h2>Agents</h2> <p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p> <ul> <li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li> <li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li> <li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li> </ul> <ul> <li>Add local agent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li> <li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li> <li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@​sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li> </ul> <h2>Safetensors</h2> <p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. 
It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p> <p>It has now become a core dependency of <code>transformers</code>.</p> <ul> <li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@​Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li> </ul> <h2>New models</h2> <h3>Swiftformer</h3> <p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p> <ul> <li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@​shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li> </ul> <h3>Autoformer</h3> <p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p> <ul> <li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@​elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li> <li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li> <li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li> <li><a 
href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=transformers&package-manager=pip&previous-version=3.5.1&new-version=4.30.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/23979
TITLE Handle `g_state` in RWKV's customized CUDA kernel to overcome sequence length limitation COMMENTS 1 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY ### Feature request Handling `g_state` in RWKV's customized CUDA kernel enables a backward pass with a chained forward. As such, the maximum `context_length` will not hinder longer sequences in training, and the behavior of the WKV backward becomes coherent with the forward. For BF16 kernels, see [here](https://github.com/Blealtan/RWKV-LM-LoRA/tree/dev-infctx/RWKV-v4neo/cuda). Credits to icecuber on the RWKV Discord channel (searching for `chunked GPT mode` in the history will show the original code). ### Motivation The current implementation of RWKV is tied to a `max_seq_length`, propagating the sequence length parameter down to the CUDA kernel. This can be problematic with longer input sequences. By supporting the `g_state` backward, we can fix the maximum sequence length inside the CUDA kernel and instead call it several times until the complete sequence gets processed. Also, given that the forward pass already supports state chaining, the backward should support it as well. > Some not so related advertising: > In [my recent experiments](https://github.com/Blealtan/RWKV-LM-LoRA/tree/dev-infctx), I'm building upon the state chaining functionality (or chunked GPT mode, per icecuber's wording) to achieve near-constant VRAM training with arbitrary sequence length. The basic idea is to run the forward pass of the entire model one piece at a time and perform checkpointing for each piece, so that at the cost of repeating the forward pass twice we can train on arbitrarily long sequences within fixed VRAM. If `g_state` is supported in `transformers`, it will be easy to port that here. ### Your contribution I can help by submitting the PR, but only later. I'm not claiming it, in case anyone has time earlier than me.
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
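A plain-PyTorch sketch of the chunked, state-carrying forward described above (not the CUDA kernel itself): `step_fn` stands in for a forward over one chunk that returns its outputs plus the carried WKV state, and gradient checkpointing keeps memory roughly constant at the cost of recomputing each chunk during backward.

```python
import torch
from torch.utils.checkpoint import checkpoint

def chunked_forward(step_fn, tokens: torch.Tensor, chunk_len: int, state: torch.Tensor):
    """Apply step_fn(chunk, state) -> (out, state) over fixed-size chunks of `tokens`,
    so the kernel's maximum sequence length no longer bounds the training length."""
    outputs = []
    for start in range(0, tokens.size(1), chunk_len):
        chunk = tokens[:, start:start + chunk_len]
        # Recompute each chunk's forward during backward instead of storing activations.
        out, state = checkpoint(step_fn, chunk, state, use_reentrant=False)
        outputs.append(out)
    return torch.cat(outputs, dim=1), state

# Toy usage with a stand-in recurrence (real code would call the WKV kernel here).
def toy_step(chunk, state):
    out = chunk + state.unsqueeze(1)
    return out, state + chunk.sum(dim=1)

x = torch.randn(2, 1024, 8, requires_grad=True)
out, final_state = chunked_forward(toy_step, x, chunk_len=256, state=torch.zeros(2, 8))
out.mean().backward()
print(out.shape, x.grad.shape)  # torch.Size([2, 1024, 8]) twice
```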
https://api.github.com/repos/huggingface/transformers/issues/23889
TITLE Behaviour between slow and fast LLaMa tokenizer not equivalent COMMENTS 11 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Transformers v4.29.2 ### Who can help? @ArthurZucker ### Reproduction For a new model (#23460), I'd like to get equivalent behaviour between the slow and fast LLaMa tokenizers. The code of the slow tokenizer was taken from the [original code](https://github.com/salesforce/LAVIS/blob/59273f651b9bffb193d1b12a235e909e9f826dda/lavis/models/blip2_models/blip2_vicuna_instruct.py#L82-L89), and now I'd like to translate this to the fast tokenizer as well. However, as can be seen below, behaviour is not equivalent: ``` from transformers import LlamaTokenizer, LlamaTokenizerFast import torch tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b", truncation_side="left") tokenizer.add_special_tokens({"pad_token": "[PAD]"}) tokenizer.add_special_tokens({"bos_token": "</s>"}) tokenizer.add_special_tokens({"eos_token": "</s>"}) tokenizer.add_special_tokens({"unk_token": "</s>"}) fast_tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", truncation_side="left") fast_tokenizer.add_special_tokens({"pad_token": "[PAD]"}) fast_tokenizer.add_special_tokens({"bos_token": "</s>"}) fast_tokenizer.add_special_tokens({"eos_token": "</s>"}) fast_tokenizer.add_special_tokens({"unk_token": "</s>"}) prompt = "What is unusual about this image?" encoding = tokenizer(prompt, return_tensors="pt") fast_encoding = fast_tokenizer(prompt, return_tensors="pt") for k,v in encoding.items(): assert torch.allclose(fast_encoding[k], v) => this assertion fails since the input_ids differ: tensor([[ 2, 1724, 338, 22910, 1048, 445, 1967, 29973]]) tensor([[ 1, 1724, 338, 22910, 1048, 445, 1967, 29973]]) ``` ### Expected behavior I'd expect that the assertion above passes.
[ 22 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization" ]
https://api.github.com/repos/huggingface/transformers/issues/25097
TITLE Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/visual_bert COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY [//]: # (dependabot-start) ⚠️ **Dependabot is rebasing this PR** ⚠️ Rebasing might not happen immediately, so don't worry if this takes some time. Note: if you make any changes to this PR yourself, they will take precedence over the rebase. --- [//]: # (dependabot-end) Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li> <li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li> <li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li> <li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li> <li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2022.12.7&new-version=2023.7.22)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
[ 21 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
https://api.github.com/repos/huggingface/transformers/issues/25040
TITLE Add ViTMatte model COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description ViTMatte is a recently released model for alpha matting on images, i.e. separating a foreground object from its background. The model accepts an input image and a trimap (a manually labelled grayscale image outlining the rough border of the foreground object) and predicts the alpha matte for each pixel. It introduces a series of small adaptations to the ViT architecture - selective global attention plus window attention, and convolutional blocks between transformer blocks - to reduce computational complexity and enhance the high-frequency information passed through the network. At the time of publishing, ViTMatte showed SOTA performance on Distinctions-646 and strong performance (outperforming MatteFormer) on Composition-1K. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Github: https://github.com/hustvl/ViTMatte Paper: https://arxiv.org/pdf/2305.15272.pdf Demo: https://colab.research.google.com/drive/1Dc2qoJueNZQyrTU19sIcrPyRDmvuMTF3?usp=sharing
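For context, a minimal usage sketch is shown below. The class names, checkpoint id, and file paths are assumptions about what a transformers integration could look like rather than a confirmed API; the model takes an RGB image plus a grayscale trimap and returns a per-pixel alpha matte.

```python
# Hypothetical sketch of a transformers-style API for ViTMatte.
# Class names, checkpoint id, and file paths are assumptions, not a confirmed interface.
import torch
from PIL import Image

from transformers import VitMatteImageProcessor, VitMatteForImageMatting  # assumed names

image = Image.open("foreground.png").convert("RGB")  # placeholder input image
trimap = Image.open("trimap.png").convert("L")       # placeholder grayscale trimap

processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-small-composition-1k")  # assumed checkpoint
model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-small-composition-1k")

inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
with torch.no_grad():
    alphas = model(**inputs).alphas  # predicted alpha matte, one value per pixel in [0, 1]
```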
[ 20, 14 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model", "Vision" ]
https://api.github.com/repos/huggingface/transformers/issues/23640
TITLE Use python generator instead of streamer for generation COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 2 eyes: 0 BODY ### Feature request Add an option for receiving tokens (or similar) as they are generated via a [python generator](https://wiki.python.org/moin/Generators), as an alternative to needing a streamer object. ### Motivation There is a new [streamers](https://huggingface.co/docs/transformers/generation_strategies#streaming) feature for accessing the tokens being generated during generation. Using a streamer requires you to run your processing code in parallel while `model.generate` blocks its current thread: the processing logic has to be defined as a callback inside the streamer object. A much simpler interface that solves the same problem is to yield the token sequences as they are generated from a [python generator](https://wiki.python.org/moin/Generators). Below is example usage for either case. ## Proposed Generator Implementation
```python
for token in model.generate(**inputs, max_new_tokens=20, yield_tokens=True):
    print(f"The next token is {token}")
```
## Current Streamer Implementation
```python
from transformers import AutoModelForCausalLM, TextStreamer


class MyStreamer:
    def __init__(self):
        pass

    def put(self, token):
        print(f"The next token is {token}")

    def end(self):
        pass


streamer = MyStreamer()
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```
Not only does the generator implementation save lines of code and simplify the syntax, but python generators return iterables, which makes it easy to use all sorts of existing python tools without modification. For example, you can: ### Enumerate
```python
for idx, token in enumerate(model.generate(**inputs, max_new_tokens=20, yield_tokens=True)):
    print(f"The {idx}'th token is {token}")
```
### Progress bar with TQDM A progress bar appears in the CLI or a jupyter notebook, updating in real time:
```python
for token in tqdm(model.generate(**inputs, max_new_tokens=20, yield_tokens=True)):
    my_endpoint.post(token)
```
And there are many more tools that would integrate just as easily! I proposed yielding tokens because it is easier to think about that way and it matches the current streamer implementation, but it may be easier to implement yielding a list of lists of tokens, since beam search and similar strategies consider multiple beams (multiple sequences) at any given time. That would also enable more features on the developer side, especially when you want to generate multiple sequences in one call. But this is more of a sidenote, and either that or the base implementation would be really awesome. ### Your contribution I'm not planning to put in a PR anytime soon, but I did have a look through the code before finding the new streamer WIP feature. It seems like it would be fairly easy to implement a version of what I am describing: add a flag to optionally `yield new_token` inside each of `beam_search`, `beam_sample`, `greedy_search`, etc., and then update the `model.generate` wrapper to also optionally yield the results from each of these.
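For comparison, a generator-style loop can already be approximated with transformers' `TextIteratorStreamer`, which is itself an iterator over decoded text pieces. The sketch below runs `generate` in a background thread and iterates in the calling thread; the checkpoint name and prompt are placeholder assumptions, and the streamer yields decoded text chunks rather than raw token ids.

```python
# Minimal sketch of approximating the requested generator interface today,
# using transformers' TextIteratorStreamer (yields decoded text pieces, not token ids).
# The checkpoint name ("gpt2") and the prompt are placeholder assumptions.
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)

# generate() blocks, so it runs in a background thread while we iterate in the main thread.
thread = Thread(target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=20))
thread.start()

for piece in streamer:  # the streamer is an iterator, so enumerate/tqdm also work on it
    print(f"The next piece of text is {piece!r}")

thread.join()
```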
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]