!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Community This page regroups resources around 🤗 Transformers developed by the community. ## Community resources: | Resource | Description | Author | |:----------|:-------------|------:| | [Hugging Face Transformers Glossary Flashcards](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | A set of flashcards based on the [Transformers Docs Glossary](glossary) that has been put into a form which can be easily learned/revised using [Anki ](https://apps.ankiweb.net/) an open source, cross platform app specifically designed for long term knowledge retention. See this [Introductory video on how to use the flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) | ## Community notebooks: | Notebook | Description | Author | | |:----------|:-------------|:-------------|------:| | [Fine-tune a pre-trained Transformer to generate lyrics](https://github.com/AlekseyKorshuk/huggingartists) | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Train T5 in Tensorflow 2 ](https://github.com/snapthat/TF-T5-text-to-text) | How to train T5 for any task using Tensorflow 2. This notebook demonstrates a Question & Answer task implemented in Tensorflow 2 using SQUAD | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [Train T5 on TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | How to train T5 on SQUAD with Transformers and Nlp | [Suraj Patil](https://github.com/patil-suraj) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [Fine-tune T5 for Classification and Multiple Choice](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [Fine-tune DialoGPT on New Datasets and Languages](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | [Nathan Cooper](https://github.com/ncoop57) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Long Sequence Modeling with Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | How to train on sequences as long as 500,000 tokens with Reformer | [Patrick von 
Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [Fine-tune BART for Summarization](https://github.com/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | How to fine-tune BART for summarization with fastai using blurr | [Wayde Gilliam](https://ohmeow.com/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | | [Fine-tune a pre-trained Transformer on anyone's tweets](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Optimize 🤗 Hugging Face models with Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | A complete tutorial showcasing W&B integration with Hugging Face | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Pretrain Longformer](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | How to build a "long" version of existing pretrained models | [Iz Beltagy](https://beltagy.net) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [Fine-tune Longformer for QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | How to fine-tune longformer model for QA task | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [Evaluate Model with 🤗nlp](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | How to evaluate longformer on TriviaQA with `nlp` | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [Fine-tune T5 for Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | [Lorenzo Ampil](https://github.com/enzoampil) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [Fine-tune DistilBert for Multiclass 
Classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | How to fine-tune DistilBert for multiclass classification with PyTorch | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| |[Fine-tune BERT for Multi-label Classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|How to fine-tune BERT for multi-label classification using PyTorch|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| |[Fine-tune T5 for Summarization](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)|How to fine-tune T5 for summarization in PyTorch and track experiments with WandB|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| |[Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)|How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing|[Michael Benesty](https://github.com/pommedeterresautee) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[Pretrain Reformer for Masked Language Modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| How to train a Reformer model with bi-directional self-attention layers | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| |[Expand and Fine Tune Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| How to increase vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it. | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| |[Fine Tune BlenderBotSmall for Summarization using the Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API. 
| [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| |[Fine-tune Electra and interpret with Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | [Eliza Szczechla](https://elsanns.github.io) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| |[fine-tune a non-English GPT-2 Model with Trainer class](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | How to fine-tune a non-English GPT-2 Model with Trainer class | [Philipp Schmid](https://www.philschmid.de) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[Fine-tune a DistilBERT Model for Multi Label Classification task](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | How to fine-tune a DistilBERT Model for Multi Label Classification task | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[Fine-tune ALBERT for sentence-pair classification](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task | [Nadir El Manouzi](https://github.com/NadirEM) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[Fine-tune Roberta for sentiment analysis](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | How to fine-tune a Roberta model for sentiment analysis | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[Evaluating Question Generation Models](https://github.com/flexudy-pipe/qugeev) | How accurate are the answers to questions generated by your seq2seq transformer model? 
| [Pascal Zoleko](https://github.com/zolekode) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[Classify text with DistilBERT and Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | How to fine-tune DistilBERT for text classification in TensorFlow | [Peter Bayerle](https://github.com/peterbayerle) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | How to warm-start a *EncoderDecoderModel* with a *bert-base-uncased* checkpoint for summarization on CNN/Dailymail | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | How to warm-start a shared *EncoderDecoderModel* with a *roberta-base* checkpoint for summarization on BBC/XSum | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[Fine-tune TAPAS on Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | How to fine-tune *TapasForQuestionAnswering* with a *tapas-base* checkpoint on the Sequential Question Answering (SQA) dataset | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[Evaluate TAPAS on Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | How to evaluate a fine-tuned *TapasForSequenceClassification* with a *tapas-base-finetuned-tabfact* checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[Fine-tuning mBART for translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[Fine-tune LayoutLM on FUNSD (a form understanding 
dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | How to fine-tune *LayoutLMForTokenClassification* on the FUNSD dataset for information extraction from scanned documents | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[Fine-Tune DistilGPT2 and Generate Text](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | How to fine-tune DistilGPT2 and generate text | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[Fine-Tune LED on up to 8K tokens](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | How to fine-tune LED on pubmed for long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Evaluate LED on Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | How to effectively evaluate LED on long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | How to fine-tune *LayoutLMForSequenceClassification* on the RVL-CDIP dataset for scanned document classification | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[Wav2Vec2 CTC decoding with GPT2 adjustment](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | How to decode CTC sequence with language model adjustment | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)| |[Fine-tune BART for summarization in two languages with Trainer class](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | How to fine-tune BART for summarization in two languages with Trainer class | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Evaluate Big Bird on Trivia 
QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | How to evaluate BigBird on long document question answering on Trivia QA | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Create video captions using Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | How to create YouTube captions from any video by transcribing the audio with Wav2Vec | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [Evaluate LUKE on Open Entity, an entity typing dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | How to evaluate *LukeForEntityClassification* on the Open Entity dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [Evaluate LUKE on TACRED, a relation extraction dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | How to evaluate *LukeForEntityPairClassification* on the TACRED dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [Evaluate LUKE on CoNLL-2003, an important NER benchmark](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | How to evaluate *LukeForEntitySpanClassification* on the CoNLL-2003 dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [Evaluate BigBird-Pegasus on PubMed dataset](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | How to evaluate *BigBirdPegasusForConditionalGeneration* on PubMed dataset | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Speech Emotion Classification with Wav2Vec2](https://github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [Detect objects in an image with DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | How to use a trained *DetrForObjectDetection* model to detect objects in an image and visualize attention | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [Fine-tune DETR on a custom object detection dataset](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | How to fine-tune *DetrForObjectDetection* on a custom object detection dataset | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [Finetune T5 for Named Entity Recognition](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | How to fine-tune *T5* on a Named Entity Recognition Task | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
huggingface/transformers/blob/main/docs/source/en/community.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# 🤗 Transformers

State-of-the-art Machine Learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).

🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer Vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🐙 **Multimodal**: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.

🤗 Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life: train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to formats like ONNX and TorchScript for deployment in production environments.

Join the growing community on the [Hub](https://huggingface.co/models), [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!

## If you are looking for custom support from the Hugging Face team

<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>

## Contents

The documentation is organized into five sections:

- **GET STARTED** provides a quick tour of the library and installation instructions to get up and running.
- **TUTORIALS** are a great place to start if you're a beginner. This section will help you gain the basic skills you need to start using the library.
- **HOW-TO GUIDES** show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or writing and sharing a custom model.
- **CONCEPTUAL GUIDES** offer more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes all classes and functions:
- **MAIN CLASSES** details the most important classes like configuration, model, tokenizer, and pipeline.
- **MODELS** details the classes and functions related to each model implemented in the library. - **INTERNAL HELPERS** details utility classes and functions used internally. ## Supported models and frameworks The table below represents the current support in the library for each of those models, whether they have a Python tokenizer (called "slow"). A "fast" tokenizer backed by the 🤗 Tokenizers library, whether they have support in Jax (via Flax), PyTorch, and/or TensorFlow. <!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!--> | Model | PyTorch support | TensorFlow support | Flax Support | |:------------------------------------------------------------------------:|:---------------:|:------------------:|:------------:| | [ALBERT](model_doc/albert) | ✅ | ✅ | ✅ | | [ALIGN](model_doc/align) | ✅ | ❌ | ❌ | | [AltCLIP](model_doc/altclip) | ✅ | ❌ | ❌ | | [Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer) | ✅ | ❌ | ❌ | | [Autoformer](model_doc/autoformer) | ✅ | ❌ | ❌ | | [Bark](model_doc/bark) | ✅ | ❌ | ❌ | | [BART](model_doc/bart) | ✅ | ✅ | ✅ | | [BARThez](model_doc/barthez) | ✅ | ✅ | ✅ | | [BARTpho](model_doc/bartpho) | ✅ | ✅ | ✅ | | [BEiT](model_doc/beit) | ✅ | ❌ | ✅ | | [BERT](model_doc/bert) | ✅ | ✅ | ✅ | | [Bert Generation](model_doc/bert-generation) | ✅ | ❌ | ❌ | | [BertJapanese](model_doc/bert-japanese) | ✅ | ✅ | ✅ | | [BERTweet](model_doc/bertweet) | ✅ | ✅ | ✅ | | [BigBird](model_doc/big_bird) | ✅ | ❌ | ✅ | | [BigBird-Pegasus](model_doc/bigbird_pegasus) | ✅ | ❌ | ❌ | | [BioGpt](model_doc/biogpt) | ✅ | ❌ | ❌ | | [BiT](model_doc/bit) | ✅ | ❌ | ❌ | | [Blenderbot](model_doc/blenderbot) | ✅ | ✅ | ✅ | | [BlenderbotSmall](model_doc/blenderbot-small) | ✅ | ✅ | ✅ | | [BLIP](model_doc/blip) | ✅ | ✅ | ❌ | | [BLIP-2](model_doc/blip-2) | ✅ | ❌ | ❌ | | [BLOOM](model_doc/bloom) | ✅ | ❌ | ✅ | | [BORT](model_doc/bort) | ✅ | ✅ | ✅ | | [BridgeTower](model_doc/bridgetower) | ✅ | ❌ | ❌ | | [BROS](model_doc/bros) | ✅ | ❌ | ❌ | | [ByT5](model_doc/byt5) | ✅ | ✅ | ✅ | | [CamemBERT](model_doc/camembert) | ✅ | ✅ | ❌ | | [CANINE](model_doc/canine) | ✅ | ❌ | ❌ | | [Chinese-CLIP](model_doc/chinese_clip) | ✅ | ❌ | ❌ | | [CLAP](model_doc/clap) | ✅ | ❌ | ❌ | | [CLIP](model_doc/clip) | ✅ | ✅ | ✅ | | [CLIPSeg](model_doc/clipseg) | ✅ | ❌ | ❌ | | [CLVP](model_doc/clvp) | ✅ | ❌ | ❌ | | [CodeGen](model_doc/codegen) | ✅ | ❌ | ❌ | | [CodeLlama](model_doc/code_llama) | ✅ | ❌ | ✅ | | [Conditional DETR](model_doc/conditional_detr) | ✅ | ❌ | ❌ | | [ConvBERT](model_doc/convbert) | ✅ | ✅ | ❌ | | [ConvNeXT](model_doc/convnext) | ✅ | ✅ | ❌ | | [ConvNeXTV2](model_doc/convnextv2) | ✅ | ✅ | ❌ | | [CPM](model_doc/cpm) | ✅ | ✅ | ✅ | | [CPM-Ant](model_doc/cpmant) | ✅ | ❌ | ❌ | | [CTRL](model_doc/ctrl) | ✅ | ✅ | ❌ | | [CvT](model_doc/cvt) | ✅ | ✅ | ❌ | | [Data2VecAudio](model_doc/data2vec) | ✅ | ❌ | ❌ | | [Data2VecText](model_doc/data2vec) | ✅ | ❌ | ❌ | | [Data2VecVision](model_doc/data2vec) | ✅ | ✅ | ❌ | | [DeBERTa](model_doc/deberta) | ✅ | ✅ | ❌ | | [DeBERTa-v2](model_doc/deberta-v2) | ✅ | ✅ | ❌ | | [Decision Transformer](model_doc/decision_transformer) | ✅ | ❌ | ❌ | | [Deformable DETR](model_doc/deformable_detr) | ✅ | ❌ | ❌ | | [DeiT](model_doc/deit) | ✅ | ✅ | ❌ | | [DePlot](model_doc/deplot) | ✅ | ❌ | ❌ | | [DETA](model_doc/deta) | ✅ | ❌ | ❌ | | [DETR](model_doc/detr) | ✅ | ❌ | ❌ | | [DialoGPT](model_doc/dialogpt) | ✅ | ✅ | ✅ | | [DiNAT](model_doc/dinat) | ✅ | ❌ | ❌ | | [DINOv2](model_doc/dinov2) | ✅ | ❌ | ❌ | | [DistilBERT](model_doc/distilbert) | ✅ | ✅ | ✅ | 
| [DiT](model_doc/dit) | ✅ | ❌ | ✅ | | [DonutSwin](model_doc/donut) | ✅ | ❌ | ❌ | | [DPR](model_doc/dpr) | ✅ | ✅ | ❌ | | [DPT](model_doc/dpt) | ✅ | ❌ | ❌ | | [EfficientFormer](model_doc/efficientformer) | ✅ | ✅ | ❌ | | [EfficientNet](model_doc/efficientnet) | ✅ | ❌ | ❌ | | [ELECTRA](model_doc/electra) | ✅ | ✅ | ✅ | | [EnCodec](model_doc/encodec) | ✅ | ❌ | ❌ | | [Encoder decoder](model_doc/encoder-decoder) | ✅ | ✅ | ✅ | | [ERNIE](model_doc/ernie) | ✅ | ❌ | ❌ | | [ErnieM](model_doc/ernie_m) | ✅ | ❌ | ❌ | | [ESM](model_doc/esm) | ✅ | ✅ | ❌ | | [FairSeq Machine-Translation](model_doc/fsmt) | ✅ | ❌ | ❌ | | [Falcon](model_doc/falcon) | ✅ | ❌ | ❌ | | [FLAN-T5](model_doc/flan-t5) | ✅ | ✅ | ✅ | | [FLAN-UL2](model_doc/flan-ul2) | ✅ | ✅ | ✅ | | [FlauBERT](model_doc/flaubert) | ✅ | ✅ | ❌ | | [FLAVA](model_doc/flava) | ✅ | ❌ | ❌ | | [FNet](model_doc/fnet) | ✅ | ❌ | ❌ | | [FocalNet](model_doc/focalnet) | ✅ | ❌ | ❌ | | [Funnel Transformer](model_doc/funnel) | ✅ | ✅ | ❌ | | [Fuyu](model_doc/fuyu) | ✅ | ❌ | ❌ | | [GIT](model_doc/git) | ✅ | ❌ | ❌ | | [GLPN](model_doc/glpn) | ✅ | ❌ | ❌ | | [GPT Neo](model_doc/gpt_neo) | ✅ | ❌ | ✅ | | [GPT NeoX](model_doc/gpt_neox) | ✅ | ❌ | ❌ | | [GPT NeoX Japanese](model_doc/gpt_neox_japanese) | ✅ | ❌ | ❌ | | [GPT-J](model_doc/gptj) | ✅ | ✅ | ✅ | | [GPT-Sw3](model_doc/gpt-sw3) | ✅ | ✅ | ✅ | | [GPTBigCode](model_doc/gpt_bigcode) | ✅ | ❌ | ❌ | | [GPTSAN-japanese](model_doc/gptsan-japanese) | ✅ | ❌ | ❌ | | [Graphormer](model_doc/graphormer) | ✅ | ❌ | ❌ | | [GroupViT](model_doc/groupvit) | ✅ | ✅ | ❌ | | [HerBERT](model_doc/herbert) | ✅ | ✅ | ✅ | | [Hubert](model_doc/hubert) | ✅ | ✅ | ❌ | | [I-BERT](model_doc/ibert) | ✅ | ❌ | ❌ | | [IDEFICS](model_doc/idefics) | ✅ | ❌ | ❌ | | [ImageGPT](model_doc/imagegpt) | ✅ | ❌ | ❌ | | [Informer](model_doc/informer) | ✅ | ❌ | ❌ | | [InstructBLIP](model_doc/instructblip) | ✅ | ❌ | ❌ | | [Jukebox](model_doc/jukebox) | ✅ | ❌ | ❌ | | [KOSMOS-2](model_doc/kosmos-2) | ✅ | ❌ | ❌ | | [LayoutLM](model_doc/layoutlm) | ✅ | ✅ | ❌ | | [LayoutLMv2](model_doc/layoutlmv2) | ✅ | ❌ | ❌ | | [LayoutLMv3](model_doc/layoutlmv3) | ✅ | ✅ | ❌ | | [LayoutXLM](model_doc/layoutxlm) | ✅ | ❌ | ❌ | | [LED](model_doc/led) | ✅ | ✅ | ❌ | | [LeViT](model_doc/levit) | ✅ | ❌ | ❌ | | [LiLT](model_doc/lilt) | ✅ | ❌ | ❌ | | [LLaMA](model_doc/llama) | ✅ | ❌ | ✅ | | [Llama2](model_doc/llama2) | ✅ | ❌ | ✅ | | [LLaVa](model_doc/llava) | ✅ | ❌ | ❌ | | [Longformer](model_doc/longformer) | ✅ | ✅ | ❌ | | [LongT5](model_doc/longt5) | ✅ | ❌ | ✅ | | [LUKE](model_doc/luke) | ✅ | ❌ | ❌ | | [LXMERT](model_doc/lxmert) | ✅ | ✅ | ❌ | | [M-CTC-T](model_doc/mctct) | ✅ | ❌ | ❌ | | [M2M100](model_doc/m2m_100) | ✅ | ❌ | ❌ | | [MADLAD-400](model_doc/madlad-400) | ✅ | ✅ | ✅ | | [Marian](model_doc/marian) | ✅ | ✅ | ✅ | | [MarkupLM](model_doc/markuplm) | ✅ | ❌ | ❌ | | [Mask2Former](model_doc/mask2former) | ✅ | ❌ | ❌ | | [MaskFormer](model_doc/maskformer) | ✅ | ❌ | ❌ | | [MatCha](model_doc/matcha) | ✅ | ❌ | ❌ | | [mBART](model_doc/mbart) | ✅ | ✅ | ✅ | | [mBART-50](model_doc/mbart50) | ✅ | ✅ | ✅ | | [MEGA](model_doc/mega) | ✅ | ❌ | ❌ | | [Megatron-BERT](model_doc/megatron-bert) | ✅ | ❌ | ❌ | | [Megatron-GPT2](model_doc/megatron_gpt2) | ✅ | ✅ | ✅ | | [MGP-STR](model_doc/mgp-str) | ✅ | ❌ | ❌ | | [Mistral](model_doc/mistral) | ✅ | ❌ | ❌ | | [Mixtral](model_doc/mixtral) | ✅ | ❌ | ❌ | | [mLUKE](model_doc/mluke) | ✅ | ❌ | ❌ | | [MMS](model_doc/mms) | ✅ | ✅ | ✅ | | [MobileBERT](model_doc/mobilebert) | ✅ | ✅ | ❌ | | [MobileNetV1](model_doc/mobilenet_v1) | ✅ | ❌ | ❌ | | [MobileNetV2](model_doc/mobilenet_v2) | ✅ | ❌ | 
❌ | | [MobileViT](model_doc/mobilevit) | ✅ | ✅ | ❌ | | [MobileViTV2](model_doc/mobilevitv2) | ✅ | ❌ | ❌ | | [MPNet](model_doc/mpnet) | ✅ | ✅ | ❌ | | [MPT](model_doc/mpt) | ✅ | ❌ | ❌ | | [MRA](model_doc/mra) | ✅ | ❌ | ❌ | | [MT5](model_doc/mt5) | ✅ | ✅ | ✅ | | [MusicGen](model_doc/musicgen) | ✅ | ❌ | ❌ | | [MVP](model_doc/mvp) | ✅ | ❌ | ❌ | | [NAT](model_doc/nat) | ✅ | ❌ | ❌ | | [Nezha](model_doc/nezha) | ✅ | ❌ | ❌ | | [NLLB](model_doc/nllb) | ✅ | ❌ | ❌ | | [NLLB-MOE](model_doc/nllb-moe) | ✅ | ❌ | ❌ | | [Nougat](model_doc/nougat) | ✅ | ✅ | ✅ | | [Nyströmformer](model_doc/nystromformer) | ✅ | ❌ | ❌ | | [OneFormer](model_doc/oneformer) | ✅ | ❌ | ❌ | | [OpenAI GPT](model_doc/openai-gpt) | ✅ | ✅ | ❌ | | [OpenAI GPT-2](model_doc/gpt2) | ✅ | ✅ | ✅ | | [OpenLlama](model_doc/open-llama) | ✅ | ❌ | ❌ | | [OPT](model_doc/opt) | ✅ | ✅ | ✅ | | [OWL-ViT](model_doc/owlvit) | ✅ | ❌ | ❌ | | [OWLv2](model_doc/owlv2) | ✅ | ❌ | ❌ | | [PatchTSMixer](model_doc/patchtsmixer) | ✅ | ❌ | ❌ | | [PatchTST](model_doc/patchtst) | ✅ | ❌ | ❌ | | [Pegasus](model_doc/pegasus) | ✅ | ✅ | ✅ | | [PEGASUS-X](model_doc/pegasus_x) | ✅ | ❌ | ❌ | | [Perceiver](model_doc/perceiver) | ✅ | ❌ | ❌ | | [Persimmon](model_doc/persimmon) | ✅ | ❌ | ❌ | | [Phi](model_doc/phi) | ✅ | ❌ | ❌ | | [PhoBERT](model_doc/phobert) | ✅ | ✅ | ✅ | | [Pix2Struct](model_doc/pix2struct) | ✅ | ❌ | ❌ | | [PLBart](model_doc/plbart) | ✅ | ❌ | ❌ | | [PoolFormer](model_doc/poolformer) | ✅ | ❌ | ❌ | | [Pop2Piano](model_doc/pop2piano) | ✅ | ❌ | ❌ | | [ProphetNet](model_doc/prophetnet) | ✅ | ❌ | ❌ | | [PVT](model_doc/pvt) | ✅ | ❌ | ❌ | | [QDQBert](model_doc/qdqbert) | ✅ | ❌ | ❌ | | [RAG](model_doc/rag) | ✅ | ✅ | ❌ | | [REALM](model_doc/realm) | ✅ | ❌ | ❌ | | [Reformer](model_doc/reformer) | ✅ | ❌ | ❌ | | [RegNet](model_doc/regnet) | ✅ | ✅ | ✅ | | [RemBERT](model_doc/rembert) | ✅ | ✅ | ❌ | | [ResNet](model_doc/resnet) | ✅ | ✅ | ✅ | | [RetriBERT](model_doc/retribert) | ✅ | ❌ | ❌ | | [RoBERTa](model_doc/roberta) | ✅ | ✅ | ✅ | | [RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm) | ✅ | ✅ | ✅ | | [RoCBert](model_doc/roc_bert) | ✅ | ❌ | ❌ | | [RoFormer](model_doc/roformer) | ✅ | ✅ | ✅ | | [RWKV](model_doc/rwkv) | ✅ | ❌ | ❌ | | [SAM](model_doc/sam) | ✅ | ✅ | ❌ | | [SeamlessM4T](model_doc/seamless_m4t) | ✅ | ❌ | ❌ | | [SeamlessM4Tv2](model_doc/seamless_m4t_v2) | ✅ | ❌ | ❌ | | [SegFormer](model_doc/segformer) | ✅ | ✅ | ❌ | | [SEW](model_doc/sew) | ✅ | ❌ | ❌ | | [SEW-D](model_doc/sew-d) | ✅ | ❌ | ❌ | | [Speech Encoder decoder](model_doc/speech-encoder-decoder) | ✅ | ❌ | ✅ | | [Speech2Text](model_doc/speech_to_text) | ✅ | ✅ | ❌ | | [SpeechT5](model_doc/speecht5) | ✅ | ❌ | ❌ | | [Splinter](model_doc/splinter) | ✅ | ❌ | ❌ | | [SqueezeBERT](model_doc/squeezebert) | ✅ | ❌ | ❌ | | [SwiftFormer](model_doc/swiftformer) | ✅ | ❌ | ❌ | | [Swin Transformer](model_doc/swin) | ✅ | ✅ | ❌ | | [Swin Transformer V2](model_doc/swinv2) | ✅ | ❌ | ❌ | | [Swin2SR](model_doc/swin2sr) | ✅ | ❌ | ❌ | | [SwitchTransformers](model_doc/switch_transformers) | ✅ | ❌ | ❌ | | [T5](model_doc/t5) | ✅ | ✅ | ✅ | | [T5v1.1](model_doc/t5v1.1) | ✅ | ✅ | ✅ | | [Table Transformer](model_doc/table-transformer) | ✅ | ❌ | ❌ | | [TAPAS](model_doc/tapas) | ✅ | ✅ | ❌ | | [TAPEX](model_doc/tapex) | ✅ | ✅ | ✅ | | [Time Series Transformer](model_doc/time_series_transformer) | ✅ | ❌ | ❌ | | [TimeSformer](model_doc/timesformer) | ✅ | ❌ | ❌ | | [Trajectory Transformer](model_doc/trajectory_transformer) | ✅ | ❌ | ❌ | | [Transformer-XL](model_doc/transfo-xl) | ✅ | ✅ | ❌ | | [TrOCR](model_doc/trocr) | ✅ | ❌ | ❌ | | 
[TVLT](model_doc/tvlt) | ✅ | ❌ | ❌ | | [TVP](model_doc/tvp) | ✅ | ❌ | ❌ | | [UL2](model_doc/ul2) | ✅ | ✅ | ✅ | | [UMT5](model_doc/umt5) | ✅ | ❌ | ❌ | | [UniSpeech](model_doc/unispeech) | ✅ | ❌ | ❌ | | [UniSpeechSat](model_doc/unispeech-sat) | ✅ | ❌ | ❌ | | [UnivNet](model_doc/univnet) | ✅ | ❌ | ❌ | | [UPerNet](model_doc/upernet) | ✅ | ❌ | ❌ | | [VAN](model_doc/van) | ✅ | ❌ | ❌ | | [VideoMAE](model_doc/videomae) | ✅ | ❌ | ❌ | | [ViLT](model_doc/vilt) | ✅ | ❌ | ❌ | | [VipLlava](model_doc/vipllava) | ✅ | ❌ | ❌ | | [Vision Encoder decoder](model_doc/vision-encoder-decoder) | ✅ | ✅ | ✅ | | [VisionTextDualEncoder](model_doc/vision-text-dual-encoder) | ✅ | ✅ | ✅ | | [VisualBERT](model_doc/visual_bert) | ✅ | ❌ | ❌ | | [ViT](model_doc/vit) | ✅ | ✅ | ✅ | | [ViT Hybrid](model_doc/vit_hybrid) | ✅ | ❌ | ❌ | | [VitDet](model_doc/vitdet) | ✅ | ❌ | ❌ | | [ViTMAE](model_doc/vit_mae) | ✅ | ✅ | ❌ | | [ViTMatte](model_doc/vitmatte) | ✅ | ❌ | ❌ | | [ViTMSN](model_doc/vit_msn) | ✅ | ❌ | ❌ | | [VITS](model_doc/vits) | ✅ | ❌ | ❌ | | [ViViT](model_doc/vivit) | ✅ | ❌ | ❌ | | [Wav2Vec2](model_doc/wav2vec2) | ✅ | ✅ | ✅ | | [Wav2Vec2-Conformer](model_doc/wav2vec2-conformer) | ✅ | ❌ | ❌ | | [Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme) | ✅ | ✅ | ✅ | | [WavLM](model_doc/wavlm) | ✅ | ❌ | ❌ | | [Whisper](model_doc/whisper) | ✅ | ✅ | ✅ | | [X-CLIP](model_doc/xclip) | ✅ | ❌ | ❌ | | [X-MOD](model_doc/xmod) | ✅ | ❌ | ❌ | | [XGLM](model_doc/xglm) | ✅ | ✅ | ✅ | | [XLM](model_doc/xlm) | ✅ | ✅ | ❌ | | [XLM-ProphetNet](model_doc/xlm-prophetnet) | ✅ | ❌ | ❌ | | [XLM-RoBERTa](model_doc/xlm-roberta) | ✅ | ✅ | ✅ | | [XLM-RoBERTa-XL](model_doc/xlm-roberta-xl) | ✅ | ❌ | ❌ | | [XLM-V](model_doc/xlm-v) | ✅ | ✅ | ✅ | | [XLNet](model_doc/xlnet) | ✅ | ✅ | ❌ | | [XLS-R](model_doc/xls_r) | ✅ | ✅ | ✅ | | [XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2) | ✅ | ✅ | ✅ | | [YOLOS](model_doc/yolos) | ✅ | ❌ | ❌ | | [YOSO](model_doc/yoso) | ✅ | ❌ | ❌ | <!-- End table-->
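As a quick illustration of the framework interoperability described above, the sketch below loads a TensorFlow checkpoint into PyTorch for inference. The checkpoint directory name is a placeholder; it is assumed to contain TensorFlow weights (and a tokenizer) saved with `save_pretrained()`.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "my-finetuned-model" is a hypothetical local directory containing a
# TensorFlow checkpoint; from_tf=True converts the weights to PyTorch on load.
tokenizer = AutoTokenizer.from_pretrained("my-finetuned-model")
model = AutoModelForSequenceClassification.from_pretrained("my-finetuned-model", from_tf=True)

inputs = tokenizer("Transformers is great!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```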
huggingface/transformers/blob/main/docs/source/en/index.md
# Key Features

Let's go through some of the key features of Gradio. This guide is intended to be a high-level overview of various things that you should be aware of as you build your demo. Where appropriate, we link to more detailed guides on specific topics.

1. [Components](#components)
2. [Queuing](#queuing)
3. [Streaming outputs](#streaming-outputs)
4. [Streaming inputs](#streaming-inputs)
5. [Alert modals](#alert-modals)
6. [Styling](#styling)
7. [Progress bars](#progress-bars)
8. [Batch functions](#batch-functions)

## Components

Gradio includes more than 30 pre-built components (as well as many user-built _custom components_) that can be used as inputs or outputs in your demo with a single line of code. These components correspond to common data types in machine learning and data science, e.g. the `gr.Image` component is designed to handle input or output images, the `gr.Label` component displays classification labels and probabilities, the `gr.Plot` component displays various kinds of plots, and so on.

Each component includes various constructor attributes that control the properties of the component. For example, you can control the number of lines in a `gr.Textbox` using the `lines` argument (which takes a positive integer) in its constructor. Or you can control the way that a user can provide an image in the `gr.Image` component using the `sources` parameter (which takes a list like `["webcam", "upload"]`).

**Static and Interactive Components**

Every component has a _static_ version that is designed to *display* data, and most components also have an _interactive_ version designed to let users input or modify the data. Typically, you don't need to think about this distinction, because when you build a Gradio demo, Gradio automatically figures out whether the component should be static or interactive based on whether it is being used as an input or output. However, you can set this manually using the `interactive` argument that every component supports.

**Preprocessing and Postprocessing**

When a component is used as an input, Gradio automatically handles the _preprocessing_ needed to convert the data from a type sent by the user's browser (such as an uploaded image) to a form that can be accepted by your function (such as a `numpy` array). Similarly, when a component is used as an output, Gradio automatically handles the _postprocessing_ needed to convert the data from what is returned by your function (such as a list of image paths) to a form that can be displayed in the user's browser (a gallery of images).

Consider an example demo with three input components (`gr.Textbox`, `gr.Number`, and `gr.Image`) and two outputs (`gr.Number` and `gr.Gallery`) that serve as a UI for your image-to-image generation model. Below is a diagram of what our preprocessing will send to the model and what our postprocessing will require from it.

![](https://github.com/gradio-app/gradio/blob/main/guides/assets/dataflow.svg?raw=true)

In this image, the following preprocessing steps happen to send the data from the browser to your function:

* The text in the textbox is converted to a Python `str` (essentially no preprocessing)
* The number in the number input is converted to a Python `float` (essentially no preprocessing)
* Most importantly, the image supplied by the user is converted to a `numpy.array` representation of the RGB values in the image

Images are converted to NumPy arrays because they are a common format for machine learning workflows.
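To make this flow concrete, here is a minimal sketch of how such an interface could be wired up. The `generate` function is a stand-in for your actual image-to-image model; it just echoes the uploaded image a few times.

```python
import numpy as np
import gradio as gr

# Stand-in for the image-to-image model described above: it receives a str,
# a float, and a numpy array, and returns a number plus a list of images.
def generate(prompt: str, strength: float, image: np.ndarray):
    variants = [image, image, image]   # pretend these are generated images
    seed_used = int(strength)          # pretend this is a seed or a score
    return seed_used, variants

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(), gr.Number(), gr.Image()],
    outputs=[gr.Number(), gr.Gallery()],
)

if __name__ == "__main__":
    demo.launch()
```

When this app runs, Gradio hands `generate` a `str`, a `float`, and a `numpy.ndarray`, then converts the returned number and list of arrays back into browser-friendly output.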
You can control the _preprocessing_ using the component's parameters when constructing the component. For example, if you instantiate the `Image` component with the following parameters, it will preprocess the image to the `PIL` format instead:

```py
img = gr.Image(type="pil")
```

Postprocessing is even simpler! Gradio automatically recognizes the format of the returned data (e.g. does the user's function return a `numpy` array or a `str` filepath for the `gr.Image` component?) and postprocesses it appropriately into a format that can be displayed by the browser. So in the image above, the following postprocessing steps happen to send the data returned from a user's function to the browser:

* The `float` is displayed as a number directly to the user
* The list of string filepaths (`list[str]`) is interpreted as a list of image filepaths and displayed as a gallery in the browser

Take a look at the [Docs](https://gradio.app/docs) to see all the parameters for each Gradio component.

## Queuing

Every Gradio app comes with a built-in queuing system that can scale to thousands of concurrent users. You can configure the queue by using the `queue()` method, which is supported by the `gr.Interface`, `gr.Blocks`, and `gr.ChatInterface` classes. For example, you can control the number of requests processed at a single time by setting the `default_concurrency_limit` parameter of `queue()`, e.g.

```python
demo = gr.Interface(...).queue(default_concurrency_limit=5)
demo.launch()
```

This limits the number of requests processed for this event listener at a single time to 5. By default, `default_concurrency_limit` is set to `1`, which means that when many users are using your app, only a single user's request will be processed at a time. This is because many machine learning functions consume a significant amount of memory, so it is only suitable to have a single user using the demo at a time. However, you can easily change this parameter in your demo. See the [docs on queueing](/docs/interface#interface-queue) for more details on configuring the queuing parameters.

## Streaming outputs

In some cases, you may want to stream a sequence of outputs rather than show a single output at once. For example, you might have an image generation model and you want to show the image that is generated at each step, leading up to the final image. Or you might have a chatbot which streams its response one token at a time instead of returning it all at once. In such cases, you can supply a **generator** function into Gradio instead of a regular function. Creating generators in Python is very simple: instead of a single `return` value, a function should `yield` a series of values instead. Usually the `yield` statement is put in some kind of loop. Here's an example of a generator that simply counts up to a given number:

```python
def my_generator(x):
    for i in range(x):
        yield i
```

You supply a generator into Gradio the same way as you would a regular function. For example, here's a (fake) image generation model that generates noise for several steps before outputting an image using the `gr.Interface` class:

$code_fake_diffusion
$demo_fake_diffusion

Note that we've added a `time.sleep(1)` in the iterator to create an artificial pause between steps so that you are able to observe the steps of the iterator (in a real image generation model, this probably wouldn't be necessary).

## Streaming inputs

Similarly, Gradio can handle streaming inputs, e.g.
a live audio stream that gets transcribed to text in real time, or an image generation model that reruns every time a user types a letter in a textbox. This is covered in more detail in our guide on building [reactive Interfaces](/guides/reactive-interfaces).

## Alert modals

You may wish to raise alerts to the user. To do so, raise a `gr.Error("custom message")` to display an error message. You can also issue `gr.Warning("message")` and `gr.Info("message")` by having them as standalone lines in your function, which will immediately display modals while continuing the execution of your function. Queueing needs to be enabled for this to work. Note below how the `gr.Error` has to be raised, while the `gr.Warning` and `gr.Info` are single lines.

```python
def start_process(name):
    gr.Info("Starting process")
    if name is None:
        gr.Warning("Name is empty")
    ...
    if success == False:
        raise gr.Error("Process failed")
```

## Styling

Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Interface` constructor. For example:

```python
demo = gr.Interface(..., theme=gr.themes.Monochrome())
```

Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [theming guide](https://gradio.app/guides/theming-guide) for more details.

For additional styling ability, you can pass any CSS (as well as custom JavaScript) to your Gradio application. This is discussed in more detail in our [custom JS and CSS guide](/guides/custom-CSS-and-JS).

## Progress bars

Gradio supports the ability to create custom Progress Bars so that you have customizability and control over the progress updates that you show to the user. In order to enable this, simply add an argument to your method that has a default value of a `gr.Progress` instance. Then you can update the progress levels by calling this instance directly with a float between 0 and 1, or by using the `tqdm()` method of the `Progress` instance to track progress over an iterable, as shown below.

$code_progress_simple
$demo_progress_simple

If you use the `tqdm` library, you can even report progress updates automatically from any `tqdm.tqdm` that already exists within your function by setting the default argument as `gr.Progress(track_tqdm=True)`!

## Batch functions

Gradio supports the ability to pass _batch_ functions. Batch functions are just functions which take in a list of inputs and return a list of predictions. For example, here is a batched function that takes in two lists of inputs (a list of words and a list of ints), and returns a list of trimmed words as output:

```py
import time

def trim_words(words, lens):
    trimmed_words = []
    time.sleep(5)
    for w, l in zip(words, lens):
        trimmed_words.append(w[:int(l)])
    return [trimmed_words]
```

The advantage of using batched functions is that if you enable queuing, the Gradio server can automatically _batch_ incoming requests and process them in parallel, potentially speeding up your demo.
Here's what the Gradio code looks like (notice the `batch=True` and `max_batch_size=16`).

With the `gr.Interface` class:

```python
demo = gr.Interface(
    fn=trim_words,
    inputs=["textbox", "number"],
    outputs=["output"],
    batch=True,
    max_batch_size=16
)
demo.launch()
```

With the `gr.Blocks` class:

```py
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        word = gr.Textbox(label="word")
        leng = gr.Number(label="leng")
        output = gr.Textbox(label="Output")
    with gr.Row():
        run = gr.Button()

    event = run.click(trim_words, [word, leng], output, batch=True, max_batch_size=16)

demo.launch()
```

In the example above, 16 requests could be processed in parallel (for a total inference time of 5 seconds), instead of each request being processed separately (for a total inference time of 80 seconds). Many Hugging Face `transformers` and `diffusers` models work very naturally with Gradio's batch mode: here's [an example demo using diffusers to generate images in batches](https://github.com/gradio-app/gradio/blob/main/demo/diffusers_with_batching/run.py).
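As a self-contained illustration of the progress bar API described in the Progress bars section above (the `$code_progress_simple` snippet itself is pulled in by Gradio's doc builder), here is a minimal sketch; the `slowly_reverse` function and its labels are illustrative rather than the official demo.

```python
import time
import gradio as gr

# Accept a gr.Progress instance as a default argument, then either call it
# directly with a float between 0 and 1 or wrap an iterable with .tqdm().
def slowly_reverse(word, progress=gr.Progress()):
    progress(0, desc="Starting")
    reversed_so_far = ""
    for letter in progress.tqdm(word, desc="Reversing"):
        time.sleep(0.25)
        reversed_so_far = letter + reversed_so_far
    return reversed_so_far

demo = gr.Interface(slowly_reverse, gr.Textbox(), gr.Textbox())
demo.launch()
```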
gradio-app/gradio/blob/main/guides/01_getting-started/02_key-features.md
Gradio Demo: file_explorer_component

```
!pip install -q gradio
```

```
import gradio as gr

with gr.Blocks() as demo:
    gr.FileExplorer()

demo.launch()
```
gradio-app/gradio/blob/main/demo/file_explorer_component/run.ipynb
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Adapters

Adapter-based methods add extra trainable parameters after the attention and fully-connected layers of a frozen pretrained model to reduce memory usage and speed up training. The exact method varies depending on the adapter: it could simply be an extra added layer, or it could express the weight updates ∆W as a low-rank decomposition of the weight matrix. Either way, the adapters are typically small but demonstrate comparable performance to a fully finetuned model and enable training larger models with fewer resources.

This guide will give you a brief overview of the adapter methods supported by PEFT (if you're interested in learning more details about a specific method, take a look at the linked paper).

## Low-Rank Adaptation (LoRA)

<Tip>

LoRA is one of the most popular PEFT methods and a good starting point if you're just getting started with PEFT. It was originally developed for large language models, but it is also a tremendously popular training method for diffusion models because of its efficiency and effectiveness.

</Tip>

As mentioned briefly earlier, [LoRA](https://hf.co/papers/2106.09685) is a technique that accelerates finetuning large models while consuming less memory.

LoRA represents the weight updates ∆W with two smaller matrices (called *update matrices*) through low-rank decomposition. These new matrices can be trained to adapt to the new data while keeping the overall number of parameters low. The original weight matrix remains frozen and doesn't receive any further updates. To produce the final results, the original and extra adapted weights are combined. You could also merge the adapter weights with the base model to eliminate inference latency.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_animated.gif"/>
</div>

This approach has a number of advantages:

* LoRA makes finetuning more efficient by drastically reducing the number of trainable parameters.
* The original pretrained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
* LoRA is orthogonal to other parameter-efficient methods and can be combined with many of them.
* The performance of models finetuned using LoRA is comparable to the performance of fully finetuned models.

In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. However, for simplicity and further parameter efficiency, LoRA is typically only applied to the attention blocks in Transformer models.
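In PEFT, applying LoRA to those attention blocks amounts to wrapping the base model with a `LoraConfig`. The following is a minimal sketch; the base checkpoint and the `target_modules` names are assumptions and depend on the architecture you actually use.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# The base model is an assumption; any causal LM from the Hub works the same way.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update matrices
    lora_alpha=32,                        # scaling factor for the LoRA updates
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # attention projections for this architecture
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA parameters are trainable
```

With a small rank like `r=8`, the printed trainable fraction is typically well under one percent of the total parameters, which is the point of the method.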
The resulting number of trainable parameters in a LoRA model depends on the size of the update matrices, which is determined mainly by the rank `r` and the shape of the original weight matrix. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora.png"/> </div> <small><a href="https://hf.co/papers/2103.10385">Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation</a></small> ## Low-Rank Hadamard Product (LoHa) Low-rank decomposition can impact performance because the weight updates are limited to the low-rank space, which can constrain a model's expressiveness. However, you don't necessarily want to use a larger rank because it increases the number of trainable parameters. To address this, [LoHa](https://huggingface.co/papers/2108.06098) (a method originally developed for computer vision) was applied to diffusion models where the ability to generate diverse images is an important consideration. LoHa should also work with general model types, but the embedding layers aren't currently implemented in PEFT. LoHa uses the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (element-wise product) instead of the matrix product. ∆W is represented by four smaller matrices instead of two - like in LoRA - and each pair of these low-rank matrices are combined with the Hadamard product. As a result, ∆W can have the same number of trainable parameters but a higher rank and expressivity. ## Low-Rank Kronecker Product (LoKr) [LoKr](https://hf.co/papers/2309.14859) is very similar to LoRA and LoHa, and it is also mainly applied to diffusion models, though you could also use it with other model types. LoKr replaces the matrix product with the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product) instead. The Kronecker product decomposition creates a block matrix which preserves the rank of the original weight matrix. Another benefit of the Kronecker product is that it can be vectorized by stacking the matrix columns. This can speed up the process because you're avoiding fully reconstructing ∆W. ## Orthogonal Finetuning (OFT) <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/oft.png"/> </div> <small><a href="https://hf.co/papers/2306.07280">Controlling Text-to-Image Diffusion by Orthogonal Finetuning</a></small> [OFT](https://hf.co/papers/2306.07280) is a method that primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. This means OFT is more capable at preserving the subject and it is better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)). OFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. In practice, this means taking the matrix product of an orthogonal matrix with the pretrained weight matrix. However, to be parameter-efficient, the orthogonal matrix is represented as a block-diagonal matrix with rank `r` blocks. Whereas LoRA reduces the number of trainable parameters with low-rank structures, OFT reduces the number of trainable parameters with a sparse block-diagonal matrix structure. 
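To make the block-diagonal idea more tangible, here is a small conceptual sketch in plain PyTorch (not PEFT's actual OFT implementation) of how an orthogonal transform assembled from `r` blocks rotates a frozen weight matrix while keeping the number of trainable entries low; the dimensions are illustrative.

```py
import torch

d, r = 768, 4                          # feature dimension and number of diagonal blocks
block = d // r
W = torch.randn(d, d)                  # stands in for a frozen pretrained weight matrix

# Build r small orthogonal blocks and assemble them into a block-diagonal matrix R
blocks = [torch.linalg.qr(torch.randn(block, block))[0] for _ in range(r)]
R = torch.block_diag(*blocks)          # (d, d), but only r * block**2 entries are free

W_adapted = R @ W                      # OFT-style update: rotate the frozen weights
print(W_adapted.shape)                 # torch.Size([768, 768])
print(r * block**2, "trainable entries vs", d * d, "for full finetuning")
```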
## Adaptive Low-Rank Adaptation (AdaLoRA) [AdaLoRA](https://hf.co/papers/2303.10512) manages the parameter budget introduced by LoRA by allocating more parameters - in other words, a higher rank `r` - for important weight matrices that are better adapted for a task and pruning less important ones. The rank is controlled by a method similar to singular value decomposition (SVD). ∆W is parameterized with two orthogonal matrices and a diagonal matrix which contains singular values. This parametrization method avoids iteratively applying SVD, which is computationally expensive. Based on this method, the rank of ∆W is adjusted according to an importance score. ∆W is divided into triplets and each triplet is scored according to its contribution to model performance. Triplets with low importance scores are pruned and triplets with high importance scores are kept for finetuning. ## Llama-Adapter [Llama-Adapter](https://hf.co/papers/2303.16199) is a method for adapting Llama into an instruction-following model. To help adapt the model for instruction-following, the adapter is trained with a 52K instruction-output dataset. A set of learnable adaption prompts are prefixed to the input instruction tokens. These are inserted into the upper layers of the model because it is better to learn with the higher-level semantics of the pretrained model. The instruction-output tokens prefixed to the input guide the adaption prompt to generate a contextual response. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/llama-adapter.png"/> </div> <small><a href="https://hf.co/papers/2303.16199">LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention</a></small> To avoid adding noise to the tokens, the adapter uses zero-initialized attention. On top of this, the adapter adds a learnable gating factor (initialized with zeros) to progressively add information to the model during training. This prevents overwhelming the model's pretrained knowledge with the newly learned instructions.
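The zero-initialized gating described above can be sketched in a few lines of plain PyTorch; this is a conceptual illustration of the mechanism, not the actual Llama-Adapter code, and the tensor shapes are illustrative.

```py
import torch
import torch.nn as nn

class ZeroInitGate(nn.Module):
    """Blend an adapter's output into a frozen layer's output via a zero-initialized gate."""

    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(1))  # starts at 0, so the adapter adds no noise at first

    def forward(self, frozen_out, adapter_out):
        # As training progresses, the gate grows and the adaption prompts
        # progressively inject the newly learned instruction knowledge.
        return frozen_out + self.gate * adapter_out

frozen_out, adapter_out = torch.randn(1, 16, 512), torch.randn(1, 16, 512)
out = ZeroInitGate()(frozen_out, adapter_out)
print(torch.equal(out, frozen_out))  # True at initialization: the pretrained behavior is preserved
```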
huggingface/peft/blob/main/docs/source/conceptual_guides/adapter.md
Godot RL Agents [Godot RL Agents](https://github.com/edbeeching/godot_rl_agents) is an Open Source package that allows video game creators, AI researchers, and hobbyists the opportunity **to learn complex behaviors for their Non Player Characters or agents**. The library provides: - An interface between games created in the [Godot Engine](https://godotengine.org/) and Machine Learning algorithms running in Python - Wrappers for four well known rl frameworks: [StableBaselines3](https://stable-baselines3.readthedocs.io/en/master/), [CleanRL](https://docs.cleanrl.dev/), [Sample Factory](https://www.samplefactory.dev/) and [Ray RLLib](https://docs.ray.io/en/latest/rllib-algorithms.html) - Support for memory-based agents with LSTM or attention based interfaces - Support for *2D and 3D games* - A suite of *AI sensors* to augment your agent's capacity to observe the game world - Godot and Godot RL Agents are **completely free and open source under a very permissive MIT license**. No strings attached, no royalties, nothing. You can find out more about Godot RL agents on their [GitHub page](https://github.com/edbeeching/godot_rl_agents) or their AAAI-2022 Workshop [paper](https://arxiv.org/abs/2112.03636). The library's creator, [Ed Beeching](https://edbeeching.github.io/), is a Research Scientist here at Hugging Face. Installation of the library is simple: `pip install godot-rl` ## Create a custom RL environment with Godot RL Agents In this section, you will **learn how to create a custom environment in the Godot Game Engine** and then implement an AI controller that learns to play with Deep Reinforcement Learning. The example game we create today is simple, **but shows off many of the features of the Godot Engine and the Godot RL Agents library**. You can then dive into the examples for more complex environments and behaviors. The environment we will be building today is called Ring Pong, the game of pong but the pitch is a ring and the paddle moves around the ring. The **objective is to keep the ball bouncing inside the ring**. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/ringpong.gif" alt="Ring Pong"> ### Installing the Godot Game Engine The [Godot game engine](https://godotengine.org/) is an open source tool for the **creation of video games, tools and user interfaces**. Godot Engine is a feature-packed, cross-platform game engine designed to create 2D and 3D games from a unified interface. It provides a comprehensive set of common tools, so users **can focus on making games without having to reinvent the wheel**. Games can be exported in one click to a number of platforms, including the major desktop platforms (Linux, macOS, Windows) as well as mobile (Android, iOS) and web-based (HTML5) platforms. While we will guide you through the steps to implement your agent, you may wish to learn more about the Godot Game Engine. Their [documentation](https://docs.godotengine.org/en/latest/index.html) is thorough, and there are many tutorials on YouTube we would also recommend [GDQuest](https://www.gdquest.com/), [KidsCanCode](https://kidscancode.org/godot_recipes/4.x/) and [Bramwell](https://www.youtube.com/channel/UCczi7Aq_dTKrQPF5ZV5J3gg) as sources of information. In order to create games in Godot, **you must first download the editor**. Godot RL Agents supports the latest version of Godot, Godot 4.0. 
It can be downloaded at the following links: - [Windows](https://downloads.tuxfamily.org/godotengine/4.0.1/Godot_v4.0.1-stable_win64.exe.zip) - [Mac](https://downloads.tuxfamily.org/godotengine/4.0.1/Godot_v4.0.1-stable_macos.universal.zip) - [Linux](https://downloads.tuxfamily.org/godotengine/4.0.1/Godot_v4.0.1-stable_linux.x86_64.zip) ### Loading the starter project We provide two versions of the codebase: - [A starter project, to download and follow along for this tutorial](https://drive.google.com/file/d/1C7xd3TibJHlxFEJPBgBLpksgxrFZ3D8e/view?usp=share_link) - [A final version of the project, for comparison and debugging.](https://drive.google.com/file/d/1k-b2Bu7uIA6poApbouX4c3sq98xqogpZ/view?usp=share_link) To load the project, in the Godot Project Manager click **Import**, navigate to where the files are located and load the **project.godot** file. If you press F5 or play in the editor, you should be able to play the game in human mode. There are several instances of the game running; this is because we want to speed up training our AI agent with many parallel environments. ### Installing the Godot RL Agents plugin The Godot RL Agents plugin can be installed from the GitHub repo or with the Godot Asset Lib in the editor. First click on the AssetLib and search for “rl” <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/godot1.png" alt="Godot"> Then click on Godot RL Agents, click Download and unselect the LICENSE and README.md files. Then click Install. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/godot2.png" alt="Godot"> The Godot RL Agents plugin is now downloaded to your machine. Now click on Project → Project Settings and enable the addon: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/godot3.png" alt="Godot"> ### Adding the AI controller We now want to add an AI controller to our game. Open the player.tscn scene; on the left you should see a hierarchy of nodes that looks like this: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/godot4.png" alt="Godot"> Right click the **Player** node and click **Add Child Node.** There are many nodes listed here, search for AIController3D and create it. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/godot5.png" alt="Godot"> The AI Controller Node should have been added to the scene tree; next to it is a scroll. Click on it to open the script that is attached to the AIController. The Godot game engine uses a scripting language called GDScript, which is syntactically similar to Python. The script contains methods that need to be implemented in order to get our AI controller working.
```python
#-- Methods that need implementing using the "extend script" option in Godot --#
func get_obs() -> Dictionary:
    assert(false, "the get_obs method is not implemented when extending from ai_controller")
    return {"obs":[]}

func get_reward() -> float:
    assert(false, "the get_reward method is not implemented when extending from ai_controller")
    return 0.0

func get_action_space() -> Dictionary:
    assert(false, "the get get_action_space method is not implemented when extending from ai_controller")
    return {
        "example_actions_continous" : {
            "size": 2,
            "action_type": "continuous"
        },
        "example_actions_discrete" : {
            "size": 2,
            "action_type": "discrete"
        },
    }

func set_action(action) -> void:
    assert(false, "the get set_action method is not implemented when extending from ai_controller")
# -----------------------------------------------------------------------------#
```

In order to implement these methods, we will need to create a class that inherits from AIController3D. This is easy to do in Godot, and is called “extending” a class. Right click the AIController3D Node, click “Extend Script” and call the new script `controller.gd`. You should now have an almost empty script file that looks like this:

```python
extends AIController3D

# Called when the node enters the scene tree for the first time.
func _ready():
    pass # Replace with function body.

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta):
    pass
```

We will now implement the 4 missing methods, delete this code, and replace it with the following:

```python
extends AIController3D

# Stores the action sampled for the agent's policy, running in python
var move_action : float = 0.0

func get_obs() -> Dictionary:
    # get the balls position and velocity in the paddle's frame of reference
    var ball_pos = to_local(_player.ball.global_position)
    var ball_vel = to_local(_player.ball.linear_velocity)
    var obs = [ball_pos.x, ball_pos.z, ball_vel.x/10.0, ball_vel.z/10.0]
    return {"obs":obs}

func get_reward() -> float:
    return reward

func get_action_space() -> Dictionary:
    return {
        "move_action" : {
            "size": 1,
            "action_type": "continuous"
        },
    }

func set_action(action) -> void:
    move_action = clamp(action["move_action"][0], -1.0, 1.0)
```

We have now defined the agent’s observation, which is the position and velocity of the ball in its local coordinate space. We have also defined the action space of the agent, which is a single continuous value ranging from -1 to +1. The next step is to update the Player’s script to use the actions from the AIController. Edit the Player’s script by clicking on the scroll next to the player node and update the code in `Player.gd` to the following:

```python
extends Node3D

@export var rotation_speed = 3.0
@onready var ball = get_node("../Ball")
@onready var ai_controller = $AIController3D

func _ready():
    ai_controller.init(self)

func game_over():
    ai_controller.done = true
    ai_controller.needs_reset = true

func _physics_process(delta):
    if ai_controller.needs_reset:
        ai_controller.reset()
        ball.reset()
        return

    var movement : float
    if ai_controller.heuristic == "human":
        movement = Input.get_axis("rotate_anticlockwise", "rotate_clockwise")
    else:
        movement = ai_controller.move_action
    rotate_y(movement*delta*rotation_speed)

func _on_area_3d_body_entered(body):
    ai_controller.reward += 1.0
```

We now need to synchronize between the game running in Godot and the neural network being trained in Python. Godot RL Agents provides a node that does just that.
Open the train.tscn scene, right click on the root node, and click “Add child node”. Then, search for “sync” and add a Godot RL Agents Sync node. This node handles the communication between Python and Godot over TCP. You can run training live in the editor by first launching the Python training with `gdrl`. In this simple example, a reasonable policy is learned in several minutes. If you wish to speed up training, click on the Sync node in the train scene and you will see there is a “Speed Up” property exposed in the editor: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/godot6.png" alt="Godot"> Try setting this property up to 8 to speed up training. This can be a great benefit on more complex environments, like the multi-player FPS we will learn about in the next chapter. ### There’s more! We have only scratched the surface of what can be achieved with Godot RL Agents; the library includes custom sensors and cameras to enrich the information available to the agent. Take a look at the [examples](https://github.com/edbeeching/godot_rl_agents_examples) to find out more! ## Author This section was written by <a href="https://twitter.com/edwardbeeching">Edward Beeching</a>
huggingface/deep-rl-class/blob/main/units/en/unitbonus3/godotrl.mdx
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # DDIMScheduler [Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: *Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.* The original codebase of this paper can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim), and you can contact the author on [tsong.me](https://tsong.me/). ## Tips The paper [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: <Tip warning={true}> 🧪 This is an experimental feature! </Tip> 1. rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) ```py pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) ``` 2. train a model with `v_prediction` (add the following argument to the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts) ```bash --prediction_type="v_prediction" ``` 3. change the sampler to always start from the last timestep ```py pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") ``` 4. 
rescale classifier-free guidance to prevent over-exposure ```py image = pipe(prompt, guidance_rescale=0.7).images[0] ``` For example: ```py from diffusers import DiffusionPipeline, DDIMScheduler import torch pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) pipe.scheduler = DDIMScheduler.from_config( pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" ) pipe.to("cuda") prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" image = pipe(prompt, guidance_rescale=0.7).images[0] image ``` ## DDIMScheduler [[autodoc]] DDIMScheduler ## DDIMSchedulerOutput [[autodoc]] schedulers.scheduling_ddim.DDIMSchedulerOutput
huggingface/diffusers/blob/main/docs/source/en/api/schedulers/ddim.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Export to ONNX Deploying 🤗 Transformers models in production environments often requires, or can benefit from exporting the models into a serialized format that can be loaded and executed on specialized runtimes and hardware. 🤗 Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to serialized formats such as ONNX and TFLite through its `exporters` module. 🤗 Optimum also provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency. This guide demonstrates how you can export 🤗 Transformers models to ONNX with 🤗 Optimum, for the guide on exporting models to TFLite, please refer to the [Export to TFLite page](tflite). ## Export to ONNX [ONNX (Open Neural Network eXchange)](http://onnx.ai) is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an _intermediate representation_) which represents the flow of data through the neural network. By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa). Once exported to ONNX format, a model can be: - optimized for inference via techniques such as [graph optimization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization) and [quantization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization). - run with ONNX Runtime via [`ORTModelForXXX` classes](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort), which follow the same `AutoModel` API as the one you are used to in 🤗 Transformers. - run with [optimized inference pipelines](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines), which has the same API as the [`pipeline`] function in 🤗 Transformers. 🤗 Optimum provides support for the ONNX export by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures. For the list of ready-made configurations, please refer to [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/onnx/overview). There are two ways to export a 🤗 Transformers model to ONNX, here we show both: - export with 🤗 Optimum via CLI. - export with 🤗 Optimum with `optimum.onnxruntime`. 
### Exporting a 🤗 Transformers model to ONNX with CLI To export a 🤗 Transformers model to ONNX, first install an extra dependency: ```bash pip install optimum[exporters] ``` To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli), or view help in command line: ```bash optimum-cli export onnx --help ``` To export a model's checkpoint from the 🤗 Hub, for example, `distilbert-base-uncased-distilled-squad`, run the following command: ```bash optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/ ``` You should see the logs indicating progress and showing where the resulting `model.onnx` is saved, like this: ```bash Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx... -[✓] ONNX model output names match reference model (start_logits, end_logits) - Validating ONNX Model output "start_logits": -[✓] (2, 16) matches (2, 16) -[✓] all values close (atol: 0.0001) - Validating ONNX Model output "end_logits": -[✓] (2, 16) matches (2, 16) -[✓] all values close (atol: 0.0001) The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx ``` The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on 🤗 Hub and provide the `--task` argument. You can review the list of supported tasks in the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/task_manager). If `task` argument is not provided, it will default to the model architecture without any task specific head. ```bash optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/ ``` The resulting `model.onnx` file can then be run on one of the [many accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX standard. For example, we can load and run the model with [ONNX Runtime](https://onnxruntime.ai/) as follows: ```python >>> from transformers import AutoTokenizer >>> from optimum.onnxruntime import ORTModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx") >>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx") >>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt") >>> outputs = model(**inputs) ``` The process is identical for TensorFlow checkpoints on the Hub. 
For instance, here's how you would export a pure TensorFlow checkpoint from the [Keras organization](https://huggingface.co/keras-io): ```bash optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/ ``` ### Exporting a 🤗 Transformers model to ONNX with `optimum.onnxruntime` As an alternative to the CLI, you can export a 🤗 Transformers model to ONNX programmatically like so: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> model_checkpoint = "distilbert_base_uncased_squad" >>> save_directory = "onnx/" >>> # Load a model from transformers and export it to ONNX >>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True) >>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) >>> # Save the onnx model and tokenizer >>> ort_model.save_pretrained(save_directory) >>> tokenizer.save_pretrained(save_directory) ``` ### Exporting a model for an unsupported architecture If you wish to contribute by adding support for a model that cannot be currently exported, you should first check if it is supported in [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview), and if it is not, [contribute to 🤗 Optimum](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute) directly. ### Exporting a model with `transformers.onnx` <Tip warning={true}> `transformers.onnx` is no longer maintained; please export models with 🤗 Optimum as described above. This section will be removed in future versions. </Tip> To export a 🤗 Transformers model to ONNX with `transformers.onnx`, install extra dependencies: ```bash pip install transformers[onnx] ``` Use the `transformers.onnx` package as a Python module to export a checkpoint using a ready-made configuration: ```bash python -m transformers.onnx --model=distilbert-base-uncased onnx/ ``` This exports an ONNX graph of the checkpoint defined by the `--model` argument. Pass any checkpoint on the 🤗 Hub or one that's stored locally. The resulting `model.onnx` file can then be run on one of the many accelerators that support the ONNX standard. For example, load and run the model with ONNX Runtime as follows: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> session = InferenceSession("onnx/model.onnx") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np") >>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` The required output names (like `["last_hidden_state"]`) can be obtained by taking a look at the ONNX configuration of each model. For example, for DistilBERT we have: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) ["last_hidden_state"] ``` The process is identical for TensorFlow checkpoints on the Hub. For example, export a pure TensorFlow checkpoint like so: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ ``` To export a model that's stored locally, save the model's weights and tokenizer files in the same directory (e.g. 
`local-pt-checkpoint`), then export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ```
huggingface/transformers/blob/main/docs/source/en/serialization.md
Spaces Settings You can configure your Space's appearance and other settings inside the `YAML` block at the top of the **README.md** file at the root of the repository. For example, if you want to create a Space with Gradio named `Demo Space` with a yellow to orange gradient thumbnail: ```yaml --- title: Demo Space emoji: 🤗 colorFrom: yellow colorTo: orange sdk: gradio app_file: app.py pinned: false --- ``` For additional settings, refer to the [Reference](./spaces-config-reference) section.
huggingface/hub-docs/blob/main/docs/hub/spaces-settings.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # How 🤗 Transformers solve tasks In [What 🤗 Transformers can do](task_summary), you learned about natural language processing (NLP), speech and audio, computer vision tasks, and some important applications of them. This page will look closely at how models solve these tasks and explain what's happening under the hood. There are many ways to solve a given task, some models may implement certain techniques or even approach the task from a new angle, but for Transformer models, the general idea is the same. Owing to its flexible architecture, most models are a variant of an encoder, decoder, or encoder-decoder structure. In addition to Transformer models, our library also has several convolutional neural networks (CNNs), which are still used today for computer vision tasks. We'll also explain how a modern CNN works. To explain how tasks are solved, we'll walk through what goes on inside the model to output useful predictions. - [Wav2Vec2](model_doc/wav2vec2) for audio classification and automatic speech recognition (ASR) - [Vision Transformer (ViT)](model_doc/vit) and [ConvNeXT](model_doc/convnext) for image classification - [DETR](model_doc/detr) for object detection - [Mask2Former](model_doc/mask2former) for image segmentation - [GLPN](model_doc/glpn) for depth estimation - [BERT](model_doc/bert) for NLP tasks like text classification, token classification and question answering that use an encoder - [GPT2](model_doc/gpt2) for NLP tasks like text generation that use a decoder - [BART](model_doc/bart) for NLP tasks like summarization and translation that use an encoder-decoder <Tip> Before you go further, it is good to have some basic knowledge of the original Transformer architecture. Knowing how encoders, decoders, and attention work will aid you in understanding how different Transformer models work. If you're just getting started or need a refresher, check out our [course](https://huggingface.co/course/chapter1/4?fw=pt) for more information! </Tip> ## Speech and audio [Wav2Vec2](model_doc/wav2vec2) is a self-supervised model pretrained on unlabeled speech data and finetuned on labeled data for audio classification and automatic speech recognition. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/> </div> This model has four main components: 1. A *feature encoder* takes the raw audio waveform, normalizes it to zero mean and unit variance, and converts it into a sequence of feature vectors that are each 20ms long. 2. Waveforms are continuous by nature, so they can't be divided into separate units like a sequence of text can be split into words. That's why the feature vectors are passed to a *quantization module*, which aims to learn discrete speech units. 
The speech unit is chosen from a collection of codewords, known as a *codebook* (you can think of this as the vocabulary). From the codebook, the vector or speech unit, that best represents the continuous audio input is chosen and forwarded through the model. 3. About half of the feature vectors are randomly masked, and the masked feature vector is fed to a *context network*, which is a Transformer encoder that also adds relative positional embeddings. 4. The pretraining objective of the context network is a *contrastive task*. The model has to predict the true quantized speech representation of the masked prediction from a set of false ones, encouraging the model to find the most similar context vector and quantized speech unit (the target label). Now that wav2vec2 is pretrained, you can finetune it on your data for audio classification or automatic speech recognition! ### Audio classification To use the pretrained model for audio classification, add a sequence classification head on top of the base Wav2Vec2 model. The classification head is a linear layer that accepts the encoder's hidden states. The hidden states represent the learned features from each audio frame which can have varying lengths. To create one vector of fixed-length, the hidden states are pooled first and then transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and target to find the most likely class. Ready to try your hand at audio classification? Check out our complete [audio classification guide](tasks/audio_classification) to learn how to finetune Wav2Vec2 and use it for inference! ### Automatic speech recognition To use the pretrained model for automatic speech recognition, add a language modeling head on top of the base Wav2Vec2 model for [connectionist temporal classification (CTC)](glossary#connectionist-temporal-classification-ctc). The language modeling head is a linear layer that accepts the encoder's hidden states and transforms them into logits. Each logit represents a token class (the number of tokens comes from the task vocabulary). The CTC loss is calculated between the logits and targets to find the most likely sequence of tokens, which are then decoded into a transcription. Ready to try your hand at automatic speech recognition? Check out our complete [automatic speech recognition guide](tasks/asr) to learn how to finetune Wav2Vec2 and use it for inference! ## Computer vision There are two ways to approach computer vision tasks: 1. Split an image into a sequence of patches and process them in parallel with a Transformer. 2. Use a modern CNN, like [ConvNeXT](model_doc/convnext), which relies on convolutional layers but adopts modern network designs. <Tip> A third approach mixes Transformers with convolutions (for example, [Convolutional Vision Transformer](model_doc/cvt) or [LeViT](model_doc/levit)). We won't discuss those because they just combine the two approaches we examine here. </Tip> ViT and ConvNeXT are commonly used for image classification, but for other vision tasks like object detection, segmentation, and depth estimation, we'll look at DETR, Mask2Former and GLPN, respectively; these models are better suited for those tasks. ### Image classification ViT and ConvNeXT can both be used for image classification; the main difference is that ViT uses an attention mechanism while ConvNeXT uses convolutions. #### Transformer [ViT](model_doc/vit) replaces convolutions entirely with a pure Transformer architecture. 
If you're familiar with the original Transformer, then you're already most of the way toward understanding ViT. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/> </div> The main change ViT introduced was in how images are fed to a Transformer: 1. An image is split into square non-overlapping patches, each of which gets turned into a vector or *patch embedding*. The patch embeddings are generated from a convolutional 2D layer which creates the proper input dimensions (which for a base Transformer is 768 values for each patch embedding). If you had a 224x224 pixel image, you could split it into 196 16x16 image patches. Just like how text is tokenized into words, an image is "tokenized" into a sequence of patches. 2. A *learnable embedding* - a special `[CLS]` token - is added to the beginning of the patch embeddings just like BERT. The final hidden state of the `[CLS]` token is used as the input to the attached classification head; other outputs are ignored. This token helps the model learn how to encode a representation of the image. 3. The last thing to add to the patch and learnable embeddings are the *position embeddings* because the model doesn't know how the image patches are ordered. The position embeddings are also learnable and have the same size as the patch embeddings. Finally, all of the embeddings are passed to the Transformer encoder. 4. The output, specifically only the output with the `[CLS]` token, is passed to a multilayer perceptron head (MLP). ViT's pretraining objective is simply classification. Like other classification heads, the MLP head converts the output into logits over the class labels and calculates the cross-entropy loss to find the most likely class. Ready to try your hand at image classification? Check out our complete [image classification guide](tasks/image_classification) to learn how to finetune ViT and use it for inference! #### CNN <Tip> This section briefly explains convolutions, but it'd be helpful to have a prior understanding of how they change an image's shape and size. If you're unfamiliar with convolutions, check out the [Convolution Neural Networks chapter](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb) from the fastai book! </Tip> [ConvNeXT](model_doc/convnext) is a CNN architecture that adopts new and modern network designs to improve performance. However, convolutions are still at the core of the model. From a high-level perspective, a [convolution](glossary#convolution) is an operation where a smaller matrix (*kernel*) is multiplied by a small window of the image pixels. It computes some features from it, such as a particular texture or curvature of a line. Then it slides over to the next window of pixels; the distance the convolution travels is known as the *stride*. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/> </div> <small>A basic convolution without padding or stride, taken from <a href="https://arxiv.org/abs/1603.07285">A guide to convolution arithmetic for deep learning.</a></small> You can feed this output to another convolutional layer, and with each successive layer, the network learns more complex and abstract things like hotdogs or rockets. 
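For intuition, here is a minimal sketch of a single convolution in PyTorch, showing how the kernel size and stride determine the shape of the output feature map; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

# A 3x3 kernel sliding over a 224x224 RGB image with a stride of 2
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1)
image = torch.randn(1, 3, 224, 224)   # (batch, channels, height, width)
features = conv(image)
print(features.shape)                 # torch.Size([1, 16, 112, 112]) -- the stride halves the spatial size
```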
Between convolutional layers, it is common to add a pooling layer to reduce dimensionality and make the model more robust to variations of a feature's position. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/> </div> ConvNeXT modernizes a CNN in five ways: 1. Change the number of blocks in each stage and "patchify" an image with a larger stride and corresponding kernel size. The non-overlapping sliding window makes this patchifying strategy similar to how ViT splits an image into patches. 2. A *bottleneck* layer shrinks the number of channels and then restores it because it is faster to do a 1x1 convolution, and you can increase the depth. An inverted bottleneck does the opposite by expanding the number of channels and shrinking them, which is more memory efficient. 3. Replace the typical 3x3 convolutional layer in the bottleneck layer with *depthwise convolution*, which applies a convolution to each input channel separately and then stacks them back together at the end. This widens the network width for improved performance. 4. ViT has a global receptive field which means it can see more of an image at once thanks to its attention mechanism. ConvNeXT attempts to replicate this effect by increasing the kernel size to 7x7. 5. ConvNeXT also makes several layer design changes that imitate Transformer models. There are fewer activation and normalization layers, the activation function is switched to GELU instead of ReLU, and it uses LayerNorm instead of BatchNorm. The output from the convolution blocks is passed to a classification head which converts the outputs into logits and calculates the cross-entropy loss to find the most likely label. ### Object detection [DETR](model_doc/detr), *DEtection TRansformer*, is an end-to-end object detection model that combines a CNN with a Transformer encoder-decoder. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/> </div> 1. A pretrained CNN *backbone* takes an image, represented by its pixel values, and creates a low-resolution feature map of it. A 1x1 convolution is applied to the feature map to reduce dimensionality and it creates a new feature map with a high-level image representation. Since the Transformer is a sequential model, the feature map is flattened into a sequence of feature vectors that are combined with positional embeddings. 2. The feature vectors are passed to the encoder, which learns the image representations using its attention layers. Next, the encoder hidden states are combined with *object queries* in the decoder. Object queries are learned embeddings that focus on the different regions of an image, and they're updated as they progress through each attention layer. The decoder hidden states are passed to a feedforward network that predicts the bounding box coordinates and class label for each object query, or `no object` if there isn't one. DETR decodes each object query in parallel to output *N* final predictions, where *N* is the number of queries. Unlike a typical autoregressive model that predicts one element at a time, object detection is a set prediction task (`bounding box`, `class label`) that makes *N* predictions in a single pass. 3. DETR uses a *bipartite matching loss* during training to compare a fixed number of predictions with a fixed set of ground truth labels. 
If there are fewer ground truth labels in the set of *N* labels, then they're padded with a `no object` class. This loss function encourages DETR to find a one-to-one assignment between the predictions and ground truth labels. If either the bounding boxes or class labels aren't correct, a loss is incurred. Likewise, if DETR predicts an object that doesn't exist, it is penalized. This encourages DETR to find other objects in an image instead of focusing on one really prominent object. An object detection head is added on top of DETR to find the class label and the coordinates of the bounding box. There are two components to the object detection head: a linear layer to transform the decoder hidden states into logits over the class labels, and a MLP to predict the bounding box. Ready to try your hand at object detection? Check out our complete [object detection guide](tasks/object_detection) to learn how to finetune DETR and use it for inference! ### Image segmentation [Mask2Former](model_doc/mask2former) is a universal architecture for solving all types of image segmentation tasks. Traditional segmentation models are typically tailored towards a particular subtask of image segmentation, like instance, semantic or panoptic segmentation. Mask2Former frames each of those tasks as a *mask classification* problem. Mask classification groups pixels into *N* segments, and predicts *N* masks and their corresponding class label for a given image. We'll explain how Mask2Former works in this section, and then you can try finetuning SegFormer at the end. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/> </div> There are three main components to Mask2Former: 1. A [Swin](model_doc/swin) backbone accepts an image and creates a low-resolution image feature map from 3 consecutive 3x3 convolutions. 2. The feature map is passed to a *pixel decoder* which gradually upsamples the low-resolution features into high-resolution per-pixel embeddings. The pixel decoder actually generates multi-scale features (contains both low- and high-resolution features) with resolutions 1/32, 1/16, and 1/8th of the original image. 3. Each of these feature maps of differing scales is fed successively to one Transformer decoder layer at a time in order to capture small objects from the high-resolution features. The key to Mask2Former is the *masked attention* mechanism in the decoder. Unlike cross-attention which can attend to the entire image, masked attention only focuses on a certain area of the image. This is faster and leads to better performance because the local features of an image are enough for the model to learn from. 4. Like [DETR](tasks_explained#object-detection), Mask2Former also uses learned object queries and combines them with the image features from the pixel decoder to make a set prediction (`class label`, `mask prediction`). The decoder hidden states are passed into a linear layer and transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and class label to find the most likely one. The mask predictions are generated by combining the pixel-embeddings with the final decoder hidden states. The sigmoid cross-entropy and dice loss is calculated between the logits and the ground truth mask to find the most likely mask. Ready to try your hand at object detection? 
Check out our complete [image segmentation guide](tasks/semantic_segmentation) to learn how to finetune SegFormer and use it for inference! ### Depth estimation [GLPN](model_doc/glpn), *Global-Local Path Network*, is a Transformer for depth estimation that combines a [SegFormer](model_doc/segformer) encoder with a lightweight decoder. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/> </div> 1. Like ViT, an image is split into a sequence of patches, except these image patches are smaller. This is better for dense prediction tasks like segmentation or depth estimation. The image patches are transformed into patch embeddings (see the [image classification](#image-classification) section for more details about how patch embeddings are created), which are fed to the encoder. 2. The encoder accepts the patch embeddings, and passes them through several encoder blocks. Each block consists of attention and Mix-FFN layers. The purpose of the latter is to provide positional information. At the end of each encoder block is a *patch merging* layer for creating hierarchical representations. The features of each group of neighboring patches are concatenated, and a linear layer is applied to the concatenated features to reduce the number of patches to a resolution of 1/4. This becomes the input to the next encoder block, where this whole process is repeated until you have image features with resolutions of 1/8, 1/16, and 1/32. 3. A lightweight decoder takes the last feature map (1/32 scale) from the encoder and upsamples it to 1/16 scale. From here, the feature is passed into a *Selective Feature Fusion (SFF)* module, which selects and combines local and global features from an attention map for each feature and then upsamples it to 1/8th. This process is repeated until the decoded features are the same size as the original image. The output is passed through two convolution layers and then a sigmoid activation is applied to predict the depth of each pixel. ## Natural language processing The Transformer was initially designed for machine translation, and since then, it has practically become the default architecture for solving all NLP tasks. Some tasks lend themselves to the Transformer's encoder structure, while others are better suited for the decoder. Still, other tasks make use of both the Transformer's encoder-decoder structure. ### Text classification [BERT](model_doc/bert) is an encoder-only model and is the first model to effectively implement deep bidirectionality to learn richer representations of the text by attending to words on both sides. 1. BERT uses [WordPiece](tokenizer_summary#wordpiece) tokenization to generate a token embedding of the text. To tell the difference between a single sentence and a pair of sentences, a special `[SEP]` token is added to differentiate them. A special `[CLS]` token is added to the beginning of every sequence of text. The final output with the `[CLS]` token is used as the input to the classification head for classification tasks. BERT also adds a segment embedding to denote whether a token belongs to the first or second sentence in a pair of sentences. 2. BERT is pretrained with two objectives: masked language modeling and next-sentence prediction. In masked language modeling, some percentage of the input tokens are randomly masked, and the model needs to predict these. 
This solves the issue of bidirectionality, where the model could cheat and see all the words and "predict" the next word. The final hidden states of the predicted mask tokens are passed to a feedforward network with a softmax over the vocabulary to predict the masked word. The second pretraining object is next-sentence prediction. The model must predict whether sentence B follows sentence A. Half of the time sentence B is the next sentence, and the other half of the time, sentence B is a random sentence. The prediction, whether it is the next sentence or not, is passed to a feedforward network with a softmax over the two classes (`IsNext` and `NotNext`). 3. The input embeddings are passed through multiple encoder layers to output some final hidden states. To use the pretrained model for text classification, add a sequence classification head on top of the base BERT model. The sequence classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and target to find the most likely label. Ready to try your hand at text classification? Check out our complete [text classification guide](tasks/sequence_classification) to learn how to finetune DistilBERT and use it for inference! ### Token classification To use BERT for token classification tasks like named entity recognition (NER), add a token classification head on top of the base BERT model. The token classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and each token to find the most likely label. Ready to try your hand at token classification? Check out our complete [token classification guide](tasks/token_classification) to learn how to finetune DistilBERT and use it for inference! ### Question answering To use BERT for question answering, add a span classification head on top of the base BERT model. This linear layer accepts the final hidden states and performs a linear transformation to compute the `span` start and end logits corresponding to the answer. The cross-entropy loss is calculated between the logits and the label position to find the most likely span of text corresponding to the answer. Ready to try your hand at question answering? Check out our complete [question answering guide](tasks/question_answering) to learn how to finetune DistilBERT and use it for inference! <Tip> 💡 Notice how easy it is to use BERT for different tasks once it's been pretrained. You only need to add a specific head to the pretrained model to manipulate the hidden states into your desired output! </Tip> ### Text generation [GPT-2](model_doc/gpt2) is a decoder-only model pretrained on a large amount of text. It can generate convincing (though not always true!) text given a prompt and complete other NLP tasks like question answering despite not being explicitly trained to. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/> </div> 1. GPT-2 uses [byte pair encoding (BPE)](tokenizer_summary#bytepair-encoding-bpe) to tokenize words and generate a token embedding. Positional encodings are added to the token embeddings to indicate the position of each token in the sequence. The input embeddings are passed through multiple decoder blocks to output some final hidden state. 
Within each decoder block, GPT-2 uses a *masked self-attention* layer which means GPT-2 can't attend to future tokens. It is only allowed to attend to tokens on the left. This is different from BERT's [`mask`] token because, in masked self-attention, an attention mask is used to set the score to `0` for future tokens. 2. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The label is the next token in the sequence, which are created by shifting the logits to the right by one. The cross-entropy loss is calculated between the shifted logits and the labels to output the next most likely token. GPT-2's pretraining objective is based entirely on [causal language modeling](glossary#causal-language-modeling), predicting the next word in a sequence. This makes GPT-2 especially good at tasks that involve generating text. Ready to try your hand at text generation? Check out our complete [causal language modeling guide](tasks/language_modeling#causal-language-modeling) to learn how to finetune DistilGPT-2 and use it for inference! <Tip> For more information about text generation, check out the [text generation strategies](generation_strategies) guide! </Tip> ### Summarization Encoder-decoder models like [BART](model_doc/bart) and [T5](model_doc/t5) are designed for the sequence-to-sequence pattern of a summarization task. We'll explain how BART works in this section, and then you can try finetuning T5 at the end. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/> </div> 1. BART's encoder architecture is very similar to BERT and accepts a token and positional embedding of the text. BART is pretrained by corrupting the input and then reconstructing it with the decoder. Unlike other encoders with specific corruption strategies, BART can apply any type of corruption. The *text infilling* corruption strategy works the best though. In text infilling, a number of text spans are replaced with a **single** [`mask`] token. This is important because the model has to predict the masked tokens, and it teaches the model to predict the number of missing tokens. The input embeddings and masked spans are passed through the encoder to output some final hidden states, but unlike BERT, BART doesn't add a final feedforward network at the end to predict a word. 2. The encoder's output is passed to the decoder, which must predict the masked tokens and any uncorrupted tokens from the encoder's output. This gives additional context to help the decoder restore the original text. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The cross-entropy loss is calculated between the logits and the label, which is just the token shifted to the right. Ready to try your hand at summarization? Check out our complete [summarization guide](tasks/summarization) to learn how to finetune T5 and use it for inference! <Tip> For more information about text generation, check out the [text generation strategies](generation_strategies) guide! </Tip> ### Translation Translation is another example of a sequence-to-sequence task, which means you can use an encoder-decoder model like [BART](model_doc/bart) or [T5](model_doc/t5) to do it. We'll explain how BART works in this section, and then you can try finetuning T5 at the end. 
BART adapts to translation by adding a separate randomly initialized encoder to map a source language to an input that can be decoded into the target language. This new encoder's embeddings are passed to the pretrained encoder instead of the original word embeddings. The source encoder is trained by updating the source encoder, positional embeddings, and input embeddings with the cross-entropy loss from the model output. The model parameters are frozen in this first step, and all the model parameters are trained together in the second step.

BART has since been followed up by a multilingual version, mBART, intended for translation and pretrained on many different languages.

Ready to try your hand at translation? Check out our complete [translation guide](tasks/translation) to learn how to finetune T5 and use it for inference!

<Tip>

For more information about text generation, check out the [text generation strategies](generation_strategies) guide!

</Tip>
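To make the encoder-decoder discussion above concrete, here is a minimal sketch of running a pretrained sequence-to-sequence checkpoint through the `pipeline` API. The `t5-small` checkpoint and the English-to-French task are just illustrative choices; any translation-capable seq2seq model from the Hub works the same way.

```py
from transformers import pipeline

# Load a pretrained encoder-decoder checkpoint behind a task-specific pipeline.
# "translation_en_to_fr" wires the tokenizer, model, and generation loop together.
translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("Machine learning models are easy to share on the Hub.")
print(result[0]["translation_text"])
```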
huggingface/transformers/blob/main/docs/source/en/tasks_explained.md
-- title: "Mixture of Experts Explained" thumbnail: /blog/assets/moe/thumbnail.png authors: - user: osanseviero - user: lewtun - user: philschmid - user: smangrul - user: ybelkada - user: pcuenq --- # Mixture of Experts Explained With the release of Mixtral 8x7B ([announcement](https://mistral.ai/news/mixtral-of-experts/), [model card](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)), a class of transformer has become the hottest topic in the open AI community: Mixture of Experts, or MoEs for short. In this blog post, we take a look at the building blocks of MoEs, how they’re trained, and the tradeoffs to consider when serving them for inference. Let’s dive in! ## Table of Contents - [What is a Mixture of Experts?](#what-is-a-mixture-of-experts-moe) - [A Brief History of MoEs](#a-brief-history-of-moes) - [What is Sparsity?](#what-is-sparsity) - [Load Balancing tokens for MoEs](#load-balancing-tokens-for-moes) - [MoEs and Transformers](#moes-and-transformers) - [Switch Transformers](#switch-transformers) - [Stabilizing training with router Z-loss](#stabilizing-training-with-router-z-loss) - [What does an expert learn?](#what-does-an-expert-learn) - [How does scaling the number of experts impact pretraining?](#how-does-scaling-the-number-of-experts-impact-pretraining) - [Fine-tuning MoEs](#fine-tuning-moes) - [When to use sparse MoEs vs dense models?](#when-to-use-sparse-moes-vs-dense-models) - [Making MoEs go brrr](#making-moes-go-brrr) - [Expert Parallelism](#parallelism) - [Capacity Factor and Communication costs](#capacity-factor-and-communication-costs) - [Serving Techniques](#serving-techniques) - [Efficient Training](#more-on-efficient-training) - [Open Source MoEs](#open-source-moes) - [Exciting directions of work](#exciting-directions-of-work) - [Some resources](#some-resources) ## TL;DR MoEs: - Are **pretrained much faster** vs. dense models - Have **faster inference** compared to a model with the same number of parameters - Require **high VRAM** as all experts are loaded in memory - Face many **challenges in fine-tuning**, but [recent work](https://arxiv.org/pdf/2305.14705.pdf) with MoE **instruction-tuning is promising** Let’s dive in! ## What is a Mixture of Experts (MoE)? The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps. Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining. So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: - **Sparse MoE layers** are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 8), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs! - A **gate network or router**, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. 
How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/00_switch_transformer.png" alt="Switch Layer"> <figcaption>MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)</figcaption> </figure> So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts. Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges: - **Training:** MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting. - **Inference:** Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, given a MoE like Mixtral 8x7B, we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon). Now that we have a rough idea of what a MoE is, let’s take a look at the research developments that led to their invention. ## A Brief History of MoEs The roots of MoEs come from the 1991 paper [Adaptive Mixture of Local Experts](https://www.cs.toronto.edu/~hinton/absps/jjnh91.pdf). The idea, akin to ensemble methods, was to have a supervised procedure for a system composed of separate networks, each handling a different subset of the training cases. Each separate network, or expert, specializes in a different region of the input space. How is the expert chosen? A gating network determines the weights for each expert. During training, both the expert and the gating are trained. Between 2010-2015, two different research areas contributed to later MoE advancement: - **Experts as components**: In the traditional MoE setup, the whole system comprises a gating network and multiple experts. MoEs as the whole model have been explored in SVMs, Gaussian Processes, and other methods. The work by [Eigen, Ranzato, and Ilya](https://arxiv.org/abs/1312.4314) explored MoEs as components of deeper networks. This allows having MoEs as layers in a multilayer network, making it possible for the model to be both large and efficient simultaneously. - **Conditional Computation**: Traditional networks process all input data through every layer. In this period, Yoshua Bengio researched approaches to dynamically activate or deactivate components based on the input token. These works led to exploring a mixture of experts in the context of NLP. 
Concretely, [Shazeer et al.](https://arxiv.org/abs/1701.06538) (2017, with “et al.” including Geoffrey Hinton and Jeff Dean, [Google’s Chuck Norris](https://www.informatika.bg/jeffdean)) scaled this idea to a 137B LSTM (the de-facto NLP architecture back then, created by Schmidhuber) by introducing sparsity, allowing to keep very fast inference even at high scale. This work focused on translation but faced many challenges, such as high communication costs and training instabilities. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/01_moe_layer.png" alt="MoE layer in LSTM"> <figcaption>MoE layer from the Outrageously Large Neural Network paper</figcaption> </figure> MoEs have allowed training multi-trillion parameter models, such as the open-sourced 1.6T parameters Switch Transformers, among others. MoEs have also been explored in Computer Vision, but this blog post will focus on the NLP domain. ## What is Sparsity? Sparsity uses the idea of conditional computation. While in dense models all the parameters are used for all the inputs, sparsity allows us to only run some parts of the whole system. Let’s dive deeper into Shazeer's exploration of MoEs for translation. The idea of conditional computation (parts of the network are active on a per-example basis) allows one to scale the size of the model without increasing the computation, and hence, this led to thousands of experts being used in each MoE layer. This setup introduces some challenges. For example, although large batch sizes are usually better for performance, batch sizes in MOEs are effectively reduced as data flows through the active experts. For example, if our batched input consists of 10 tokens, **five tokens might end in one expert, and the other five tokens might end in five different experts, leading to uneven batch sizes and underutilization**. The [Making MoEs go brrr](#making-moes-go-brrr) section below will discuss other challenges and solutions. How can we solve this? A learned gating network (G) decides which experts (E) to send a part of the input: $$ y = \sum_{i=1}^{n} G(x)_i E_i(x) $$ In this setup, all experts are run for all inputs - it’s a weighted multiplication. But, what happens if G is 0? If that’s the case, there’s no need to compute the respective expert operations and hence we save compute. What’s a typical gating function? In the most traditional setup, we just use a simple network with a softmax function. The network will learn which expert to send the input. $$ G_\sigma(x) = \text{Softmax}(x \cdot W_g) $$ Shazeer’s work also explored other gating mechanisms, such as Noisy Top-k Gating. This gating approach introduces some (tunable) noise and then keeps the top k values. That is: 1. We add some noise $$ H(x)_i = (x \cdot W_{\text{g}})_i + \text{StandardNormal()} \cdot \text{Softplus}((x \cdot W_{\text{noise}})_i) $$ 2. We only pick the top k $$ \text{KeepTopK}(v, k)_i = \begin{cases} v_i & \text{if } v_i \text{ is in the top } k \text{ elements of } v, \\ -\infty & \text{otherwise.} \end{cases} $$ 3. We apply the softmax. $$ G(x) = \text{Softmax}(\text{KeepTopK}(H(x), k)) $$ This sparsity introduces some interesting properties. By using a low enough k (e.g. one or two), we can train and run inference much faster than if many experts were activated. Why not just select the top expert? 
The initial conjecture was that routing to more than one expert was needed to have the gate learn how to route to different experts, so at least two experts had to be picked. The [Switch Transformers](#switch-transformers) section revisits this decision. Why do we add noise? That’s for load balancing! ## Load balancing tokens for MoEs As discussed before, if all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an **auxiliary loss** is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In `transformers`, the auxiliary loss is exposed via the `aux_loss` parameter. ## MoEs and Transformers Transformers are a very clear case that scaling up the number of parameters improves the performance, so it’s not surprising that Google explored this with [GShard](https://arxiv.org/abs/2006.16668), which explores scaling up transformers beyond 600 billion parameters. GShard replaces every other FFN layer with an MoE layer using top-2 gating in both the encoder and the decoder. The next image shows how this looks like for the encoder part. This setup is quite beneficial for large-scale computing: when we scale to multiple devices, the MoE layer is shared across devices while all the other layers are replicated. This is further discussed in the [“Making MoEs go brrr”](#making-moes-go-brrr) section. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/02_moe_block.png" alt="MoE Transformer Encoder"> <figcaption>MoE Transformer Encoder from the GShard Paper</figcaption> </figure> To maintain a balanced load and efficiency at scale, the GShard authors introduced a couple of changes in addition to an auxiliary loss similar to the one discussed in the previous section: - **Random routing**: in a top-2 setup, we always pick the top expert, but the second expert is picked with probability proportional to its weight. - **Expert capacity**: we can set a threshold of how many tokens can be processed by one expert. If both experts are at capacity, the token is considered overflowed, and it’s sent to the next layer via residual connections (or dropped entirely in other projects). This concept will become one of the most important concepts for MoEs. Why is expert capacity needed? Since all tensor shapes are statically determined at compilation time, but we cannot know how many tokens will go to each expert ahead of time, we need to fix the capacity factor. The GShard paper has contributions by expressing parallel computation patterns that work well for MoEs, but discussing that is outside the scope of this blog post. **Note:** when we run inference, only some experts will be triggered. At the same time, there are shared computations, such as self-attention, which is applied for all tokens. That’s why when we talk of a 47B model of 8 experts, we can run with the compute of a 12B dense model. If we use top-2, 14B parameters would be used. But given that the attention operations are shared (among others), the actual number of used parameters is 12B. 
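Before moving on to Switch Transformers, here is a small, self-contained PyTorch sketch of the noisy top-k routing described in the last two sections, with a toy sparse MoE layer around it. This is an illustrative implementation only: the class names, dimensions, and the per-expert Python loop are ours and do not come from any of the papers or libraries discussed here; production implementations use batched or block-sparse kernels instead of looping over experts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGate(nn.Module):
    """Noisy top-k gate: scores every expert, keeps the k largest, renormalizes."""
    def __init__(self, d_model, n_experts, k=2):
        super().__init__()
        self.w_gate = nn.Linear(d_model, n_experts, bias=False)
        self.w_noise = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.w_gate(x)
        if self.training:                      # noise helps load balancing during training
            logits = logits + torch.randn_like(logits) * F.softplus(self.w_noise(x))
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        gates = F.softmax(topk_vals, dim=-1)   # softmax over the kept experts only
        return gates, topk_idx

class SparseMoE(nn.Module):
    """Replaces a dense FFN with n_experts FFNs; each token only visits k of them."""
    def __init__(self, d_model, d_ff, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )
        self.gate = TopKGate(d_model, n_experts, k)

    def forward(self, x):                      # x: (tokens, d_model)
        gates, topk_idx = self.gate(x)
        out = torch.zeros_like(x)
        for slot in range(topk_idx.shape[-1]): # loop over the k routing slots
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += gates[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = SparseMoE(d_model=16, d_ff=64)
tokens = torch.randn(10, 16)                   # a toy batch of 10 tokens
print(moe(tokens).shape)                       # torch.Size([10, 16])
```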
## Switch Transformers Although MoEs showed a lot of promise, they struggle with training and fine-tuning instabilities. [Switch Transformers](https://arxiv.org/abs/2101.03961) is a very exciting work that deep dives into these topics. The authors even released a [1.6 trillion parameters MoE on Hugging Face](https://huggingface.co/google/switch-c-2048) with 2048 experts, which you can run with transformers. Switch Transformers achieved a 4x pre-train speed-up over T5-XXL. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/03_switch_layer.png" alt="Switch Transformer Layer"> <figcaption>Switch Transformer Layer of the Switch Transformer paper</figcaption> </figure> Just as in GShard, the authors replaced the FFN layers with a MoE layer. The Switch Transformers paper proposes a Switch Transformer layer that receives two inputs (two different tokens) and has four experts. Contrary to the initial idea of using at least two experts, Switch Transformers uses a simplified single-expert strategy. The effects of this approach are: - The router computation is reduced - The batch size of each expert can be at least halved - Communication costs are reduced - Quality is preserved Switch Transformers also explores the concept of expert capacity. $$ \text{Expert Capacity} = \left(\frac{\text{tokens per batch}}{\text{number of experts}}\right) \times \text{capacity factor} $$ The capacity suggested above evenly divides the number of tokens in the batch across the number of experts. If we use a capacity factor greater than 1, we provide a buffer for when tokens are not perfectly balanced. Increasing the capacity will lead to more expensive inter-device communication, so it’s a trade-off to keep in mind. In particular, Switch Transformers perform well at low capacity factors (1-1.25) Switch Transformer authors also revisit and simplify the load balancing loss mentioned in the sections. For each Switch layer, the auxiliary loss is added to the total model loss during training. This loss encourages uniform routing and can be weighted using a hyperparameter. The authors also experiment with selective precision, such as training the experts with `bfloat16` while using full precision for the rest of the computations. Lower precision reduces communication costs between processors, computation costs, and memory for storing tensors. The initial experiments, in which both the experts and the gate networks were trained in `bfloat16`, yielded more unstable training. This was, in particular, due to the router computation: as the router has an exponentiation function, having higher precision is important. To mitigate the instabilities, full precision was used for the routing as well. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/04_switch_table.png" alt="Table shows that selective precision does not degrade quality."> <figcaption>Using selective precision does not degrade quality and enables faster models</figcaption> </figure> This [notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing) showcases fine-tuning Switch Transformers for summarization, but we suggest first reviewing the [fine-tuning section](#fine-tuning-moes). Switch Transformers uses an encoder-decoder setup in which they did a MoE counterpart of T5. 
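As a concrete reading of the expert capacity formula above, here is a tiny sketch; the batch and expert counts are made up for illustration.

```python
def expert_capacity(tokens_per_batch: int, num_experts: int, capacity_factor: float) -> int:
    """Maximum number of tokens each expert is allowed to process per batch."""
    return int((tokens_per_batch / num_experts) * capacity_factor)

# Illustrative numbers: a batch of 4096 tokens routed across 64 experts.
print(expert_capacity(4096, 64, 1.0))    # 64  -> perfectly balanced budget
print(expert_capacity(4096, 64, 1.25))   # 80  -> 25% slack for imbalanced routing
```

Tokens routed to an expert beyond this budget overflow and, depending on the implementation, are dropped or passed on through the residual connection, as discussed for GShard above.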
The [GLaM](https://arxiv.org/abs/2112.06905) paper explores pushing up the scale of these models by training a model matching GPT-3 quality using 1/3 of the energy (yes, thanks to the lower amount of computing needed to train a MoE, they can reduce the carbon footprint by up to an order of magnitude). The authors focused on decoder-only models and few-shot and one-shot evaluation rather than fine-tuning. They used Top-2 routing and much larger capacity factors. In addition, they explored the capacity factor as a metric one can change during training and evaluation depending on how much computing one wants to use.

## Stabilizing training with router Z-loss

The balancing loss previously discussed can lead to instability issues. We can use many methods to stabilize sparse models at the expense of quality. For example, introducing dropout improves stability but leads to loss of model quality. On the other hand, adding more multiplicative components improves quality but decreases stability.

Router z-loss, introduced in [ST-MoE](https://arxiv.org/abs/2202.08906), significantly improves training stability without quality degradation by penalizing large logits entering the gating network. Since this loss encourages the absolute magnitude of values to be smaller, roundoff errors are reduced, which can be quite impactful for exponential functions such as the gating. We recommend reviewing the paper for details.

## What does an expert learn?

The ST-MoE authors observed that encoder experts specialize in a group of tokens or shallow concepts. For example, we might end with a punctuation expert, a proper noun expert, etc. On the other hand, the decoder experts have less specialization. The authors also trained in a multilingual setup. Although one could imagine each expert specializing in a language, the opposite happens: due to token routing and load balancing, there is no single expert specialized in any given language.

<figure class="image text-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/05_experts_learning.png" alt="Experts specialize in some token groups">
  <figcaption>Table from the ST-MoE paper showing which token groups were sent to which expert.</figcaption>
</figure>

## How does scaling the number of experts impact pretraining?

More experts lead to improved sample efficiency and faster speedup, but these are diminishing gains (especially after 256 or 512), and more VRAM will be needed for inference. The properties studied in Switch Transformers at large scale were consistent at small scale, even with 2, 4, or 8 experts per layer.

## Fine-tuning MoEs

> Mixtral is supported with version 4.36.0 of transformers. You can install it with `pip install "transformers==4.36.0" --upgrade`.

The overfitting dynamics are very different between dense and sparse models. Sparse models are more prone to overfitting, so we can explore higher regularization (e.g. dropout) within the experts themselves (e.g. we can have one dropout rate for the dense layers and another, higher, dropout for the sparse layers).

One question is whether to use the auxiliary loss for fine-tuning. The ST-MoE authors experimented with turning off the auxiliary loss, and the quality was not significantly impacted, even when up to 11% of the tokens were dropped. Token dropping might be a form of regularization that helps prevent overfitting.
Switch Transformers observed that at a fixed pretrain perplexity, the sparse model does worse than the dense counterpart in downstream tasks, especially on reasoning-heavy tasks such as SuperGLUE. On the other hand, for knowledge-heavy tasks such as TriviaQA, the sparse model performs disproportionately well. The authors also observed that a fewer number of experts helped at fine-tuning. Another observation that confirmed the generalization issue is that the model did worse in smaller tasks but did well in larger tasks. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/06_superglue_curves.png" alt="Fine-tuning learning curves"> <figcaption>In the small task (left), we can see clear overfitting as the sparse model does much worse in the validation set. In the larger task (right), the MoE performs well. This image is from the ST-MoE paper.</figcaption> </figure> One could experiment with freezing all non-expert weights. That is, we'll only update the MoE layers. This leads to a huge performance drop. We could try the opposite: freezing only the parameters in MoE layers, which worked almost as well as updating all parameters. This can help speed up and reduce memory for fine-tuning. This can be somewhat counter-intuitive as 80% of the parameters are in the MoE layers (in the ST-MoE project). Their hypothesis for that architecture is that, as expert layers only occur every 1/4 layers, and each token sees at most two experts per layer, updating the MoE parameters affects much fewer layers than updating other parameters. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/07_superglue_bars.png" alt="Only updating the non MoE layers works well in fine-tuning"> <figcaption>By only freezing the MoE layers, we can speed up the training while preserving the quality. This image is from the ST-MoE paper.</figcaption> </figure> One last part to consider when fine-tuning sparse MoEs is that they have different fine-tuning hyperparameter setups - e.g., sparse models tend to benefit more from smaller batch sizes and higher learning rates. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/08_superglue_dense_vs_sparse.png" alt="Table comparing fine-tuning batch size and learning rate between dense and sparse models."> <figcaption>Sparse models fine-tuned quality improves with higher learning rates and smaller batch sizes. This image is from the ST-MoE paper.</figcaption> </figure> At this point, you might be a bit sad that people have struggled to fine-tune MoEs. Excitingly, a recent paper, [MoEs Meets Instruction Tuning](https://arxiv.org/pdf/2305.14705.pdf) (July 2023), performs experiments doing: - Single task fine-tuning - Multi-task instruction-tuning - Multi-task instruction-tuning followed by single-task fine-tuning When the authors fine-tuned the MoE and the T5 equivalent, the T5 equivalent was better. When the authors fine-tuned the Flan T5 (T5 instruct equivalent) MoE, the MoE performed significantly better. Not only this, the improvement of the Flan-MoE over the MoE was larger than Flan T5 over T5, indicating that MoEs might benefit much more from instruction tuning than dense models. MoEs benefit more from a higher number of tasks. Unlike the previous discussion suggesting to turn off the auxiliary loss function, the loss actually prevents overfitting. 
<figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/09_fine_tune_evals.png" alt="MoEs benefit even more from instruct tuning than dense models"> <figcaption>Sparse models benefit more from instruct-tuning compared to dense models. This image is from the MoEs Meets Instruction Tuning paper</figcaption> </figure> ## When to use sparse MoEs vs dense models? Experts are useful for high throughput scenarios with many machines. Given a fixed compute budget for pretraining, a sparse model will be more optimal. For low throughput scenarios with little VRAM, a dense model will be better. **Note:** one cannot directly compare the number of parameters between sparse and dense models, as both represent significantly different things. ## Making MoEs go brrr The initial MoE work presented MoE layers as a branching setup, leading to slow computation as GPUs are not designed for it and leading to network bandwidth becoming a bottleneck as the devices need to send info to others. This section will discuss some existing work to make pretraining and inference with these models more practical. MoEs go brrrrr. ### Parallelism Let’s do a brief review of parallelism: - **Data parallelism:** the same weights are replicated across all cores, and the data is partitioned across cores. - **Model parallelism:** the model is partitioned across cores, and the data is replicated across cores. - **Model and data parallelism:** we can partition the model and the data across cores. Note that different cores process different batches of data. - **Expert parallelism**: experts are placed on different workers. If combined with data parallelism, each core has a different expert and the data is partitioned across all cores With expert parallelism, experts are placed on different workers, and each worker takes a different batch of training samples. For non-MoE layers, expert parallelism behaves the same as data parallelism. For MoE layers, tokens in the sequence are sent to workers where the desired experts reside. <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/10_parallelism.png" alt="Image illustrating model, expert, and data prallelism"> <figcaption>Illustration from the Switch Transformers paper showing how data and models are split over cores with different parallelism techniques.</figcaption> </figure> ### Capacity Factor and communication costs Increasing the capacity factor (CF) increases the quality but increases communication costs and memory of activations. If all-to-all communications are slow, using a smaller capacity factor is better. A good starting point is using top-2 routing with 1.25 capacity factor and having one expert per core. During evaluation, the capacity factor can be changed to reduce compute. ### Serving techniques > You can deploy [mistralai/Mixtral-8x7B-Instruct-v0.1](https://ui.endpoints.huggingface.co/new?repository=mistralai%2FMixtral-8x7B-Instruct-v0.1&vendor=aws&region=us-east-1&accelerator=gpu&instance_size=2xlarge&task=text-generation&no_suggested_compute=true&tgi=true&tgi_max_batch_total_tokens=1024000&tgi_max_total_tokens=32000) to Inference Endpoints. A big downside of MoEs is the large number of parameters. For local use cases, one might want to use a smaller model. Let's quickly discuss a few techniques that can help with serving: * The Switch Transformers authors did early distillation experiments. 
By distilling a MoE back to its dense counterpart, they could keep 30-40% of the sparsity gains. Distillation, hence, provides the benefits of faster pretraining and using a smaller model in production.
* Recent approaches modify the routing to route full sentences or tasks to an expert, permitting extracting sub-networks for serving.
* Aggregation of Experts (MoE): this technique merges the weights of the experts, hence reducing the number of parameters at inference time.

### More on efficient training

FasterMoE (March 2022) analyzes the performance of MoEs in highly efficient distributed systems and analyzes the theoretical limit of different parallelism strategies, as well as techniques to skew expert popularity, fine-grained schedules of communication that reduce latency, and an adjusted topology-aware gate that picks experts based on the lowest latency, leading to a 17x speedup.

Megablocks (Nov 2022) explores efficient sparse pretraining by providing new GPU kernels that can handle the dynamism present in MoEs. Their proposal never drops tokens and maps efficiently to modern hardware, leading to significant speedups. What’s the trick? Traditional MoEs use batched matrix multiplication, which assumes all experts have the same shape and the same number of tokens. In contrast, Megablocks expresses MoE layers as block-sparse operations that can accommodate imbalanced assignment.

<figure class="image text-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/moe/11_expert_matmuls.png" alt="Matrix multiplication optimized for block-sparse operations.">
  <figcaption>Block-sparse matrix multiplication for differently sized experts and number of tokens (from [MegaBlocks](https://arxiv.org/abs/2211.15841)).</figcaption>
</figure>

## Open Source MoEs

There are nowadays several open source projects to train MoEs:
- Megablocks: https://github.com/stanford-futuredata/megablocks
- Fairseq: https://github.com/facebookresearch/fairseq/tree/main/examples/moe_lm
- OpenMoE: https://github.com/XueFuzhao/OpenMoE

In the realm of released open access MoEs, you can check:
- [Switch Transformers (Google)](https://huggingface.co/collections/google/switch-transformers-release-6548c35c6507968374b56d1f): Collection of T5-based MoEs going from 8 to 2048 experts. The largest model has 1.6 trillion parameters.
- [NLLB MoE (Meta)](https://huggingface.co/facebook/nllb-moe-54b): A MoE variant of the NLLB translation model.
- [OpenMoE](https://huggingface.co/fuzhao): A community effort that has released Llama-based MoEs.
- [Mixtral 8x7B (Mistral)](https://huggingface.co/mistralai): A high-quality MoE that outperforms Llama 2 70B and has much faster inference. An instruct-tuned model is also released. Read more about it in [the announcement blog post](https://mistral.ai/news/mixtral-of-experts/).

## Exciting directions of work

Further experiments on **distilling** a sparse MoE back to a dense model with fewer parameters but a similar number of active parameters.

Another area will be quantization of MoEs. [QMoE](https://arxiv.org/abs/2310.16795) (Oct. 2023) is a good step in this direction by quantizing the MoEs to less than 1 bit per parameter, hence compressing the 1.6T Switch Transformer, which uses 3.2TB of accelerator memory, down to just 160GB.
So, TL;DR, some interesting areas to explore: * Distilling Mixtral into a dense model * Explore model merging techniques of the experts and their impact in inference time * Perform extreme quantization techniques of Mixtral ## Some resources - [Adaptive Mixture of Local Experts (1991)](https://www.cs.toronto.edu/~hinton/absps/jjnh91.pdf) - [Learning Factored Representations in a Deep Mixture of Experts (2013)](https://arxiv.org/abs/1312.4314) - [Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer (2017)](https://arxiv.org/abs/1701.06538) - [GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (Jun 2020)](https://arxiv.org/abs/2006.16668) - [GLaM: Efficient Scaling of Language Models with Mixture-of-Experts (Dec 2021)](https://arxiv.org/abs/2112.06905) - [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity (Jan 2022)](https://arxiv.org/abs/2101.03961) - [ST-MoE: Designing Stable and Transferable Sparse Expert Models (Feb 2022)](https://arxiv.org/abs/2202.08906) - [FasterMoE: modeling and optimizing training of large-scale dynamic pre-trained models(April 2022)](https://dl.acm.org/doi/10.1145/3503221.3508418) - [MegaBlocks: Efficient Sparse Training with Mixture-of-Experts (Nov 2022)](https://arxiv.org/abs/2211.15841) - [Mixture-of-Experts Meets Instruction Tuning:A Winning Combination for Large Language Models (May 2023)](https://arxiv.org/abs/2305.14705) - [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). ## Citation ```bibtex @misc {sanseviero2023moe, author = { Omar Sanseviero and Lewis Tunstall and Philipp Schmid and Sourab Mangrulkar and Younes Belkada and Pedro Cuenca }, title = { Mixture of Experts Explained }, year = 2023, url = { https://huggingface.co/blog/moe }, publisher = { Hugging Face Blog } } ``` ``` Sanseviero, et al., "Mixture of Experts Explained", Hugging Face Blog, 2023. ```
huggingface/blog/blob/main/moe.md
-- title: "An Introduction to Q-Learning Part 1" thumbnail: /blog/assets/70_deep_rl_q_part1/thumbnail.gif authors: - user: ThomasSimonini --- # An Introduction to Q-Learning Part 1 <h2>Unit 2, part 1 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://huggingface.co/deep-rl-course/unit2/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* <img src="assets/70_deep_rl_q_part1/thumbnail.gif" alt="Thumbnail"/> --- ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://huggingface.co/deep-rl-course/unit2/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* In the [first chapter of this class](https://huggingface.co/blog/deep-rl-intro), we learned about Reinforcement Learning (RL), the RL process, and the different methods to solve an RL problem. We also trained our first lander agent to **land correctly on the Moon 🌕 and uploaded it to the Hugging Face Hub.** So today, we're going to **dive deeper into one of the Reinforcement Learning methods: value-based methods** and study our first RL algorithm: **Q-Learning.** We'll also **implement our first RL agent from scratch**: a Q-Learning agent and will train it in two environments: 1. Frozen-Lake-v1 (non-slippery version): where our agent will need to **go from the starting state (S) to the goal state (G)** by walking only on frozen tiles (F) and avoiding holes (H). 2. An autonomous taxi will need **to learn to navigate** a city to **transport its passengers from point A to point B.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/envs.gif" alt="Environments"/> </figure> This unit is divided into 2 parts: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two_parts.jpg" alt="Two Parts"/> </figure> In the first part, we'll **learn about the value-based methods and the difference between Monte Carlo and Temporal Difference Learning.** And in the second part, **we'll study our first RL algorithm: Q-Learning, and implement our first RL Agent.** This unit is fundamental **if you want to be able to work on Deep Q-Learning** (unit 3): the first Deep RL algorithm that was able to play Atari games and **beat the human level on some of them** (breakout, space invaders…). So let's get started! - [What is RL? A short recap](#what-is-rl-a-short-recap) - [The two types of value-based methods](#the-two-types-of-value-based-methods) - [The State-Value function](#the-state-value-function) - [The Action-Value function](#the-action-value-function) - [The Bellman Equation: simplify our value estimation](#the-bellman-equation-simplify-our-value-estimation) - [Monte Carlo vs Temporal Difference Learning](#monte-carlo-vs-temporal-difference-learning) - [Monte Carlo: learning at the end of the episode](#monte-carlo-learning-at-the-end-of-the-episode) - [Temporal Difference Learning: learning at each step](#temporal-difference-learning-learning-at-each-step) ## **What is RL? 
A short recap** In RL, we build an agent that can **make smart decisions**. For instance, an agent that **learns to play a video game.** Or a trading agent that **learns to maximize its benefits** by making smart decisions on **what stocks to buy and when to sell.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/rl-process.jpg" alt="RL process"/> </figure> But, to make intelligent decisions, our agent will learn from the environment by **interacting with it through trial and error** and receiving rewards (positive or negative) **as unique feedback.** Its goal **is to maximize its expected cumulative reward** (because of the reward hypothesis). **The agent's decision-making process is called the policy π:** given a state, a policy will output an action or a probability distribution over actions. That is, given an observation of the environment, a policy will provide an action (or multiple probabilities for each action) that the agent should take. <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/policy.jpg" alt="Policy"/> </figure> **Our goal is to find an optimal policy π***, aka., a policy that leads to the best expected cumulative reward. And to find this optimal policy (hence solving the RL problem), there **are two main types of RL methods**: - *Policy-based methods*: **Train the policy directly** to learn which action to take given a state. - *Value-based methods*: **Train a value function** to learn **which state is more valuable** and use this value function **to take the action that leads to it.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-approaches.jpg" alt="Two RL approaches"/> </figure> And in this chapter, **we'll dive deeper into the Value-based methods.** ## **The two types of value-based methods** In value-based methods, **we learn a value function** that **maps a state to the expected value of being at that state.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/vbm-1.jpg" alt="Value Based Methods"/> </figure> The value of a state is the **expected discounted return** the agent can get if it **starts at that state and then acts according to our policy.** If you forgot what discounting is, you [can read this section](https://huggingface.co/blog/deep-rl-intro#rewards-and-the-discounting). > But what does it mean to act according to our policy? After all, we don't have a policy in value-based methods, since we train a value function and not a policy. > Remember that the goal of an **RL agent is to have an optimal policy π.** To find it, we learned that there are two different methods: - *Policy-based methods:* **Directly train the policy** to select what action to take given a state (or a probability distribution over actions at that state). In this case, we **don't have a value function.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-approaches-2.jpg" alt="Two RL approaches"/> </figure> The policy takes a state as input and outputs what action to take at that state (deterministic policy). And consequently, **we don't define by hand the behavior of our policy; it's the training that will define it.** - *Value-based methods:* **Indirectly, by training a value function** that outputs the value of a state or a state-action pair. 
Given this value function, our policy **will take action.** But, because we didn't train our policy, **we need to specify its behavior.** For instance, if we want a policy that, given the value function, will take actions that always lead to the biggest reward, **we'll create a Greedy Policy.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-approaches-3.jpg" alt="Two RL approaches"/> <figcaption>Given a state, our action-value function (that we train) outputs the value of each action at that state, then our greedy policy (that we defined) selects the action with the biggest state-action pair value.</figcaption> </figure> Consequently, whatever method you use to solve your problem, **you will have a policy**, but in the case of value-based methods you don't train it, your policy **is just a simple function that you specify** (for instance greedy policy) and this policy **uses the values given by the value-function to select its actions.** So the difference is: - In policy-based, **the optimal policy is found by training the policy directly.** - In value-based, **finding an optimal value function leads to having an optimal policy.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/link-value-policy.jpg" alt="Link between value and policy"/> </figure> In fact, most of the time, in value-based methods, you'll use **an Epsilon-Greedy Policy** that handles the exploration/exploitation trade-off; we'll talk about it when we talk about Q-Learning in the second part of this unit. So, we have two types of value-based functions: ### **The State-Value function** We write the state value function under a policy π like this: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/state-value-function-1.jpg" alt="State value function"/> </figure> For each state, the state-value function outputs the expected return if the agent **starts at that state,** and then follow the policy forever after (for all future timesteps if you prefer). <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/state-value-function-2.jpg" alt="State value function"/> <figcaption>If we take the state with value -7: it's the expected return starting at that state and taking actions according to our policy (greedy policy), so right, right, right, down, down, right, right.</figcaption> </figure> ### **The Action-Value function** In the Action-value function, for each state and action pair, the action-value function **outputs the expected return** if the agent starts in that state and takes action, and then follows the policy forever after. 
The value of taking action an in state s under a policy π is: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/action-state-value-function-1.jpg" alt="Action State value function"/> </figure> <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/action-state-value-function-2.jpg" alt="Action State value function"/> </figure> We see that the difference is: - In state-value function, we calculate **the value of a state \\(S_t\\)** - In action-value function, we calculate **the value of the state-action pair ( \\(S_t, A_t\\) ) hence the value of taking that action at that state.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-types.jpg" alt="Two types of value function"/> <figcaption> Note: We didn't fill all the state-action pairs for the example of Action-value function</figcaption> </figure> In either case, whatever value function we choose (state-value or action-value function), **the value is the expected return.** However, the problem is that it implies that **to calculate EACH value of a state or a state-action pair, we need to sum all the rewards an agent can get if it starts at that state.** This can be a tedious process, and that's **where the Bellman equation comes to help us.** ## **The Bellman Equation: simplify our value estimation** The Bellman equation **simplifies our state value or state-action value calculation.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman.jpg" alt="Bellman equation"/> </figure> With what we learned from now, we know that if we calculate the \\(V(S_t)\\) (value of a state), we need to calculate the return starting at that state and then follow the policy forever after. **(Our policy that we defined in the following example is a Greedy Policy, and for simplification, we don't discount the reward).** So to calculate \\(V(S_t)\\), we need to make the sum of the expected rewards. Hence: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman2.jpg" alt="Bellman equation"/> <figcaption>To calculate the value of State 1: the sum of rewards if the agent started in that state and then followed the greedy policy (taking actions that leads to the best states values) for all the time steps.</figcaption> </figure> Then, to calculate the \\(V(S_{t+1})\\), we need to calculate the return starting at that state \\(S_{t+1}\\). <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman3.jpg" alt="Bellman equation"/> <figcaption>To calculate the value of State 2: the sum of rewards **if the agent started in that state, and then followed the **policy for all the time steps.</figcaption> </figure> So you see, that's a pretty tedious process if you need to do it for each state value or state-action value. 
Instead of calculating the expected return for each state or each state-action pair, **we can use the Bellman equation.** The Bellman equation is a recursive equation that works like this: instead of starting for each state from the beginning and calculating the return, we can consider the value of any state as: **The immediate reward \\(R_{t+1}\\) + the discounted value of the state that follows ( \\(gamma * V(S_{t+1}) \\) ) .** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman4.jpg" alt="Bellman equation"/> <figcaption>For simplification here we don’t discount so gamma = 1.</figcaption> </figure> If we go back to our example, the value of State 1= expected cumulative return if we start at that state. <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman2.jpg" alt="Bellman equation"/> </figure> To calculate the value of State 1: the sum of rewards **if the agent started in that state 1** and then followed the **policy for all the time steps.** Which is equivalent to \\(V(S_{t})\\) = Immediate reward \\(R_{t+1}\\) + Discounted value of the next state \\(gamma * V(S_{t+1})\\) <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman6.jpg" alt="Bellman equation"/> </figure> For simplification, here we don't discount, so gamma = 1. - The value of \\(V(S_{t+1}) \\) = Immediate reward \\(R_{t+2}\\) + Discounted value of the next state ( \\(gamma * V(S_{t+2})\\) ). - And so on. To recap, the idea of the Bellman equation is that instead of calculating each value as the sum of the expected return, **which is a long process.** This is equivalent **to the sum of immediate reward + the discounted value of the state that follows.** ## **Monte Carlo vs Temporal Difference Learning** The last thing we need to talk about before diving into Q-Learning is the two ways of learning. Remember that an RL agent **learns by interacting with its environment.** The idea is that **using the experience taken**, given the reward it gets, will **update its value or policy.** Monte Carlo and Temporal Difference Learning are two different **strategies on how to train our value function or our policy function.** Both of them **use experience to solve the RL problem.** On one hand, Monte Carlo uses **an entire episode of experience before learning.** On the other hand, Temporal Difference uses **only a step ( \\(S_t, A_t, R_{t+1}, S_{t+1}\\) ) to learn.** We'll explain both of them **using a value-based method example.** ### **Monte Carlo: learning at the end of the episode** Monte Carlo waits until the end of the episode, calculates \\(G_t\\) (return) and uses it as **a target for updating \\(V(S_t)\\).** So it requires a **complete entire episode of interaction before updating our value function.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/monte-carlo-approach.jpg" alt="Monte Carlo"/> </figure> If we take an example: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-2.jpg" alt="Monte Carlo"/> </figure> - We always start the episode **at the same starting point.** - **The agent takes actions using the policy**. For instance, using an Epsilon Greedy Strategy, a policy that alternates between exploration (random actions) and exploitation. - We get **the reward and the next state.** - We terminate the episode if the cat eats the mouse or if the mouse moves > 10 steps. 
- At the end of the episode, **we have a list of State, Actions, Rewards, and Next States**
- **The agent will sum the total rewards \\(G_t\\)** (to see how well it did).
- It will then **update \\(V(s_t)\\) based on the formula**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/MC-3.jpg" alt="Monte Carlo"/>
</figure>

- Then **start a new game with this new knowledge**

By running more and more episodes, **the agent will learn to play better and better.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/MC-3p.jpg" alt="Monte Carlo"/>
</figure>

For instance, if we train a state-value function using Monte Carlo:

- We just started to train our Value function, **so it returns 0 value for each state**
- Our learning rate (lr) is 0.1 and our discount rate is 1 (= no discount)
- Our mouse **explores the environment and takes random actions**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/MC-4.jpg" alt="Monte Carlo"/>
</figure>

- The mouse made more than 10 steps, so the episode ends.

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/MC-4p.jpg" alt="Monte Carlo"/>
</figure>

- We have a list of states, actions, rewards, and next states, so **we need to calculate the return \\(G_t\\)**
- \\(G_t = R_{t+1} + R_{t+2} + R_{t+3} ...\\) (for simplicity, we don’t discount the rewards)
- \\(G_t = 1 + 0 + 0 + 0 + 0 + 0 + 1 + 1 + 0 + 0\\)
- \\(G_t = 3\\)
- We can now update \\(V(S_0)\\):

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/MC-5.jpg" alt="Monte Carlo"/>
</figure>

- New \\(V(S_0) = V(S_0) + lr * [G_t - V(S_0)]\\)
- New \\(V(S_0) = 0 + 0.1 * [3 - 0]\\)
- New \\(V(S_0) = 0.3\\)

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/MC-5p.jpg" alt="Monte Carlo"/>
</figure>

### **Temporal Difference Learning: learning at each step**

**Temporal Difference, on the other hand, waits for only one interaction (one step) \\(S_{t+1}\\)** to form a TD target and update \\(V(S_t)\\) using \\(R_{t+1}\\) and \\(gamma * V(S_{t+1})\\).

The idea with **TD is to update the \\(V(S_t)\\) at each step.** But because we didn't play during an entire episode, we don't have \\(G_t\\) (expected return). Instead, **we estimate \\(G_t\\) by adding \\(R_{t+1}\\) and the discounted value of the next state.**

This is called bootstrapping. It's called this **because TD bases its update part on an existing estimate \\(V(S_{t+1})\\) and not a complete sample \\(G_t\\).**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/TD-1.jpg" alt="Temporal Difference"/>
</figure>

This method is called TD(0) or **one-step TD (update the value function after any individual step).**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/TD-1p.jpg" alt="Temporal Difference"/>
</figure>

If we take the same example,

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/TD-2.jpg" alt="Temporal Difference"/>
</figure>

- We just started to train our Value function, so it returns 0 value for each state.
- Our learning rate (lr) is 0.1, and our discount rate is 1 (no discount).
- Our mouse explores the environment and takes a random action: **going to the left**
- It gets a reward \\(R_{t+1} = 1\\) since **it eats a piece of cheese**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/TD-2p.jpg" alt="Temporal Difference"/>
</figure>

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/TD-3.jpg" alt="Temporal Difference"/>
</figure>

We can now update \\(V(S_0)\\):

New \\(V(S_0) = V(S_0) + lr * [R_1 + gamma * V(S_1) - V(S_0)]\\)

New \\(V(S_0) = 0 + 0.1 * [1 + 1 * 0 - 0]\\)

New \\(V(S_0) = 0.1\\)

So we just updated our value function for State 0.

Now we **continue to interact with this environment with our updated value function.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/TD-3p.jpg" alt="Temporal Difference"/>
</figure>

If we summarize:

- With Monte Carlo, we update the value function from a complete episode, and so we **use the actual accurate discounted return of this episode.**
- With TD learning, we update the value function from a step, so we replace \\(G_t\\) that we don't have with **an estimated return called the TD target.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/Summary.jpg" alt="Summary"/>
</figure>

So now, before diving into Q-Learning, let's summarize what we just learned:

We have two types of value-based functions:

- State-Value function: outputs the expected return if **the agent starts at a given state and acts according to the policy forever after.**
- Action-Value function: outputs the expected return if **the agent starts in a given state, takes a given action at that state** and then acts according to the policy forever after.
- In value-based methods, **we define the policy by hand** because we don't train it, we train a value function. The idea is that if we have an optimal value function, we **will have an optimal policy.**

There are two types of methods to learn a policy for a value function:

- With *the Monte Carlo method*, we update the value function from a complete episode, and so we **use the actual accurate discounted return of this episode.**
- With *the TD Learning method,* we update the value function from a step, so we replace \\(G_t\\) that we don't have with **an estimated return called the TD target.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/70_deep_rl_q_part1/summary-learning-mtds.jpg" alt="Summary"/>
</figure>

---

So that’s all for today. Congrats on finishing this first part of the chapter! There was a lot of information.

**That’s normal if you still feel confused with all these elements**. This was the same for me and for all people who studied RL.

**Take time to really grasp the material before continuing**. And since the best way to learn and avoid the illusion of competence is **to test yourself**, we wrote a quiz to help you find where **you need to reinforce your study**. Check your knowledge here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit2/quiz1.md

<a href="https://huggingface.co/blog/deep-rl-q-part2">In the second part, we’ll study our first RL algorithm: Q-Learning</a>, and implement our first RL Agent in two environments:

1. Frozen-Lake-v1 (non-slippery version): where our agent will need to **go from the starting state (S) to the goal state (G)** by walking only on frozen tiles (F) and avoiding holes (H).
2.
An autonomous taxi will need **to learn to navigate** a city to **transport its passengers from point A to point B.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/envs.gif" alt="Environments"/> </figure> And don't forget to share with your friends who want to learn 🤗 ! Finally, we want **to improve and update the course iteratively with your feedback**. If you have some, please fill this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9 ### Keep learning, stay awesome,
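P.S. If it helps to see the two update rules from this part as code, here is a tiny, illustrative sketch of the Monte Carlo and TD(0) value updates using the same toy numbers as the mouse example above. The variable names and the small value table are ours for illustration; this is not a full agent.

```python
# Toy value table: every state starts at 0, as in the examples above.
V = {state: 0.0 for state in range(10)}
lr, gamma = 0.1, 1.0  # learning rate 0.1, no discounting

# Monte Carlo: wait until the episode ends, then update with the actual return G_t.
def mc_update(state, G):
    V[state] = V[state] + lr * (G - V[state])

# TD(0): update after a single step using the estimate R_{t+1} + gamma * V(S_{t+1}).
def td_update(state, reward, next_state):
    td_target = reward + gamma * V[next_state]
    V[state] = V[state] + lr * (td_target - V[state])

mc_update(0, G=3)                      # New V(S_0) = 0 + 0.1 * [3 - 0] = 0.3
print(V[0])                            # 0.3

V[0] = 0.0                             # reset to compare with the TD example
td_update(0, reward=1, next_state=1)   # New V(S_0) = 0 + 0.1 * [1 + 1 * 0 - 0] = 0.1
print(V[0])                            # 0.1
```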
huggingface/blog/blob/main/deep-rl-q-part1.md
Metric Card for CUAD ## Metric description This metric wraps the official scoring script for version 1 of the [Contract Understanding Atticus Dataset (CUAD)](https://huggingface.co/datasets/cuad), which is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. The CUAD metric computes several scores: [Exact Match](https://huggingface.co/metrics/exact_match), [F1 score](https://huggingface.co/metrics/f1), Area Under the Precision-Recall Curve, [Precision](https://huggingface.co/metrics/precision) at 80% [recall](https://huggingface.co/metrics/recall) and Precision at 90% recall. ## How to use The CUAD metric takes two inputs : `predictions`, a list of question-answer dictionaries with the following key-values: - `id`: the id of the question-answer pair as given in the references. - `prediction_text`: a list of possible texts for the answer, as a list of strings depending on a threshold on the confidence probability of each prediction. `references`: a list of question-answer dictionaries with the following key-values: - `id`: the id of the question-answer pair (the same as above). - `answers`: a dictionary *in the CUAD dataset format* with the following keys: - `text`: a list of possible texts for the answer, as a list of strings. - `answer_start`: a list of start positions for the answer, as a list of ints. Note that `answer_start` values are not taken into account to compute the metric. ```python from datasets import load_metric cuad_metric = load_metric("cuad") predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] results = cuad_metric.compute(predictions=predictions, references=references) ``` ## Output values The output of the CUAD metric consists of a dictionary that contains one or several of the following metrics: `exact_match`: The normalized answers that exactly match the reference answer, with a range between 0.0 and 1.0 (see [exact match](https://huggingface.co/metrics/exact_match) for more information). `f1`: The harmonic mean of the precision and recall (see [F1 score](https://huggingface.co/metrics/f1) for more information). Its range is between 0.0 and 1.0 -- its lowest possible value is 0, if either the precision or the recall is 0, and its highest possible value is 1.0, which means perfect precision and recall. `aupr`: The Area Under the Precision-Recall curve, with a range between 0.0 and 1.0, with a higher value representing both high recall and high precision, and a low value representing low values for both. See the [Wikipedia article](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) for more information. `prec_at_80_recall`: The fraction of true examples among the predicted examples at a recall rate of 80%. Its range is between 0.0 and 1.0. For more information, see [precision](https://huggingface.co/metrics/precision) and [recall](https://huggingface.co/metrics/recall). 
`prec_at_90_recall`: The fraction of true examples among the predicted examples at a recall rate of 90%. Its range is between 0.0 and 1.0.

### Values from popular papers
The [original CUAD paper](https://arxiv.org/pdf/2103.06268.pdf) reports that a [DeBERTa model](https://huggingface.co/microsoft/deberta-base) attains an AUPR of 47.8%, a Precision at 80% Recall of 44.0%, and a Precision at 90% Recall of 17.8% (they do not report F1 or Exact Match separately).

For more recent model performance, see the [dataset leaderboard](https://paperswithcode.com/dataset/cuad).

## Examples

Maximal values:

```python
from datasets import load_metric
cuad_metric = load_metric("cuad")
predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
results = cuad_metric.compute(predictions=predictions, references=references)
print(results)
{'exact_match': 100.0, 'f1': 100.0, 'aupr': 0.0, 'prec_at_80_recall': 1.0, 'prec_at_90_recall': 1.0}
```

Minimal values:

```python
from datasets import load_metric
cuad_metric = load_metric("cuad")
predictions = [{'prediction_text': ['The Company appoints the Distributor as an exclusive distributor of Products in the Market, subject to the terms and conditions of this Agreement.'], 'id': 'LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Exclusivity_0'}]
references = [{'answers': {'answer_start': [143], 'text': ['The seller']}, 'id': 'LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Exclusivity_0'}]
results = cuad_metric.compute(predictions=predictions, references=references)
print(results)
{'exact_match': 0.0, 'f1': 0.0, 'aupr': 0.0, 'prec_at_80_recall': 0, 'prec_at_90_recall': 0}
```

Partial match:

```python
from datasets import load_metric
cuad_metric = load_metric("cuad")
predictions = [{'prediction_text': ['The Company appoints the Distributor as an exclusive distributor of Products in the Market, subject to the terms and conditions of this Agreement.', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
results = cuad_metric.compute(predictions=predictions, references=references)
print(results)
{'exact_match': 100.0, 'f1': 50.0, 'aupr': 0.0, 'prec_at_80_recall': 0, 'prec_at_90_recall': 0}
```

## Limitations and bias
This metric works only with datasets that have the same format as the [CUAD dataset](https://huggingface.co/datasets/cuad). The limitations and biases of this dataset are not discussed, but it could exhibit annotation bias given the homogeneity of annotators for this dataset.

In terms of the metric itself, the accuracy of AUPR has been debated: its estimates can be quite noisy, and collapsing the Precision-Recall curve into a single number hides the trade-offs between the different operating points plotted on the curve rather than reflecting the performance of an individual system.
Reporting the original F1 and exact match scores is therefore useful to ensure a more complete representation of system performance. ## Citation ```bibtex @article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} } ``` ## Further References - [CUAD dataset homepage](https://www.atticusprojectai.org/cuad-v1-performance-announcements)
huggingface/datasets/blob/main/metrics/cuad/README.md
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # XLM-RoBERTa-XL ## Overview The XLM-RoBERTa-XL model was proposed in [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. The abstract from the paper is the following: *Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.* This model was contributed by [Soonhwan-Kwon](https://github.com/Soonhwan-Kwon) and [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). ## Usage tips XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XLMRobertaXLConfig [[autodoc]] XLMRobertaXLConfig ## XLMRobertaXLModel [[autodoc]] XLMRobertaXLModel - forward ## XLMRobertaXLForCausalLM [[autodoc]] XLMRobertaXLForCausalLM - forward ## XLMRobertaXLForMaskedLM [[autodoc]] XLMRobertaXLForMaskedLM - forward ## XLMRobertaXLForSequenceClassification [[autodoc]] XLMRobertaXLForSequenceClassification - forward ## XLMRobertaXLForMultipleChoice [[autodoc]] XLMRobertaXLForMultipleChoice - forward ## XLMRobertaXLForTokenClassification [[autodoc]] XLMRobertaXLForTokenClassification - forward ## XLMRobertaXLForQuestionAnswering [[autodoc]] XLMRobertaXLForQuestionAnswering - forward
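As a quick, non-authoritative companion to the usage tip above, the sketch below runs masked-language-model inference without any `lang` tensors. The checkpoint name `facebook/xlm-roberta-xl` and the example sentence are assumptions for illustration; swap in whichever XLM-RoBERTa-XL checkpoint you actually use.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Checkpoint name is an assumption; use the XLM-RoBERTa-XL checkpoint you have access to.
checkpoint = "facebook/xlm-roberta-xl"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# No `lang` tensor is needed: the model infers the language from the input ids alone.
inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring token for the masked position.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```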
huggingface/transformers/blob/main/docs/source/en/model_doc/xlm-roberta-xl.md
Gradio Demo: timeseries-forecasting-with-prophet ### A simple dashboard showing pypi stats for python libraries. Updates on load, and has no buttons! ``` !pip install -q gradio holidays==0.24 prophet==1.1.2 pandas pypistats plotly ``` ``` import gradio as gr import pypistats from datetime import date from dateutil.relativedelta import relativedelta import pandas as pd from prophet import Prophet pd.options.plotting.backend = "plotly" def get_forecast(lib, time): data = pypistats.overall(lib, total=True, format="pandas") data = data.groupby("category").get_group("with_mirrors").sort_values("date") start_date = date.today() - relativedelta(months=int(time.split(" ")[0])) df = data[(data['date'] > str(start_date))] df1 = df[['date','downloads']] df1.columns = ['ds','y'] m = Prophet() m.fit(df1) future = m.make_future_dataframe(periods=90) forecast = m.predict(future) fig1 = m.plot(forecast) return fig1 with gr.Blocks() as demo: gr.Markdown( """ **Pypi Download Stats 📈 with Prophet Forecasting**: see live download stats for popular open-source libraries 🤗 along with a 3 month forecast using Prophet. The [ source code for this Gradio demo is here](https://huggingface.co/spaces/gradio/timeseries-forecasting-with-prophet/blob/main/app.py). """) with gr.Row(): lib = gr.Dropdown(["pandas", "scikit-learn", "torch", "prophet"], label="Library", value="pandas") time = gr.Dropdown(["3 months", "6 months", "9 months", "12 months"], label="Downloads over the last...", value="12 months") plt = gr.Plot() lib.change(get_forecast, [lib, time], plt, queue=False) time.change(get_forecast, [lib, time], plt, queue=False) demo.load(get_forecast, [lib, time], plt, queue=False) demo.launch() ```
gradio-app/gradio/blob/main/demo/timeseries-forecasting-with-prophet/run.ipynb
# Custom Components in 5 minutes

Gradio 4.0 introduces Custom Components -- the ability for developers to create their own custom components and use them in Gradio apps. You can publish your components as Python packages so that other users can use them as well. Users will be able to use all of Gradio's existing functions, such as `gr.Blocks`, `gr.Interface`, API usage, themes, etc. with Custom Components. This guide will cover how to get started making custom components.

## Installation

You will need to have:

* Python 3.8+ (<a href="https://www.python.org/downloads/" target="_blank">install here</a>)
* Node.js v16.14+ (<a href="https://nodejs.dev/en/download/package-manager/" target="_blank">install here</a>)
* npm 9+ (<a href="https://docs.npmjs.com/downloading-and-installing-node-js-and-npm/" target="_blank">install here</a>)
* Gradio 4.0+ (`pip install --upgrade gradio`)

## The Workflow

The Custom Components workflow consists of 4 steps: create, dev, build, and publish.

1. create: creates a template for you to start developing a custom component.
2. dev: launches a development server with a sample app & hot reloading, allowing you to easily develop your custom component.
3. build: builds a Python package containing your custom component's Python and JavaScript code -- this makes things official!
4. publish: uploads your package to [PyPi](https://pypi.org/) and/or a sample app to [HuggingFace Spaces](https://hf.co/spaces).

Each of these steps is done via the Custom Component CLI. You can invoke it with `gradio cc` or `gradio component`.

Tip: Run `gradio cc --help` to get a help menu of all available commands. You can also append `--help` to any command name to bring up a help page for that command, e.g. `gradio cc create --help`.

## 1. create

Bootstrap a new template by running the following in any working directory:

```bash
gradio cc create MyComponent --template SimpleTextbox
```

Instead of `MyComponent`, give your component any name.

Instead of `SimpleTextbox`, you can use any Gradio component as a template. `SimpleTextbox` is actually a special component: a stripped-down version of the `Textbox` component, which makes it particularly useful when creating your first custom component. Some other components that are good if you are starting out: `SimpleDropdown` or `File`.

Tip: Run `gradio cc show` to get a list of available component templates.

The `create` command will:

1. Create a directory with your component's name in lowercase with the following structure:

```directory
- backend/ <- The python code for your custom component
- frontend/ <- The javascript code for your custom component
- demo/ <- A sample app using your custom component. Modify this to develop your component!
- pyproject.toml <- Used to build the package and specify package metadata.
```

2. Install the component in development mode

Each of the directories will have the code you need to get started developing!

## 2. dev

Once you have created your new component, you can start a development server by entering the directory and running

```bash
gradio cc dev
```

You'll see several lines printed to the console. The most important one is the one that says:

> Frontend Server (Go here): http://localhost:7861/

The port number might be different for you.

Click on that link to launch the demo app in hot reload mode. Now you can start making changes to the backend and frontend, and you'll see the results reflected live in the sample app!

We'll go through a real example in a later guide.
Tip: You don't have to run dev mode from your custom component directory. The first argument to `dev` mode is the path to the directory. By default it uses the current directory.

## 3. build

Once you are satisfied with your custom component's implementation, you can `build` it to use it outside of the development server.

From your component directory, run:

```bash
gradio cc build
```

This will create a `tar.gz` and a `.whl` file in a `dist/` subdirectory. If you or anyone else installs that `.whl` file (`pip install <path-to-whl>`), they will be able to use your custom component in any Gradio app!

## 4. publish

Right now, your package is only available as a `.whl` file on your computer. You can share that file with the world with the `publish` command!

Simply run the following command from your component directory:

```bash
gradio cc publish
```

This will guide you through the following process:

1. Upload your distribution files to PyPI. This is optional. If you decide to upload to PyPI, you will need a PyPI username and password. You can get one [here](https://pypi.org/account/register/).
2. Upload a demo of your component to Hugging Face Spaces. This is also optional. Here is an example of what publishing looks like:

<video autoplay muted loop>
  <source src="https://gradio-builds.s3.amazonaws.com/assets/text_with_attachments_publish.mov" type="video/mp4" />
</video>

## Conclusion

Now that you know the high-level workflow of creating custom components, you can go in depth in the next guides!
After reading the guides, check out this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub so you can learn from others' code.
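As an optional, hypothetical illustration of what using a freshly built component can look like (the package and class names below are assumptions, not part of the official guide): if `gradio cc create MyComponent --template SimpleTextbox` produced a package that installs as `gradio_mycomponent` and exposes a `MyComponent` class, an app using it might look like the sketch below. Check the generated `pyproject.toml` and `demo/` app for the real names in your project.

```python
# Hypothetical sketch: assumes the wheel built earlier was installed, e.g.
#   pip install dist/gradio_mycomponent-0.0.1-py3-none-any.whl   (the filename will differ)
# and that the generated package exposes a `MyComponent` class.
import gradio as gr
from gradio_mycomponent import MyComponent  # hypothetical import path

with gr.Blocks() as demo:
    inp = MyComponent(label="Type something")
    out = MyComponent(label="Echoed back")
    btn = gr.Button("Echo")
    btn.click(lambda value: value, inputs=inp, outputs=out)

demo.launch()
```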
gradio-app/gradio/blob/main/guides/05_custom-components/01_custom-components-in-five-minutes.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Export to TFLite [TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special efficient portable format identified by the `.tflite` file extension. 🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the `exporters.tflite` module. For the list of supported model architectures, please refer to [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview). To export a model to TFLite, install the required dependencies: ```bash pip install optimum[exporters-tf] ``` To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model), or view help in command line: ```bash optimum-cli export tflite --help ``` To export a model's checkpoint from the 🤗 Hub, for example, `bert-base-uncased`, run the following command: ```bash optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/ ``` You should see the logs indicating progress and showing where the resulting `model.tflite` is saved, like this: ```bash Validating TFLite model... -[✓] TFLite model output names match reference model (logits) - Validating TFLite Model output "logits": -[✓] (1, 128, 30522) matches (1, 128, 30522) -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05) The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05: - logits: max diff = 5.817413330078125e-05. The exported model was saved at: bert_tflite ``` The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on 🤗 Hub.
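Once you have a `model.tflite`, one way to sanity-check it is to run it with TensorFlow's TFLite interpreter. The snippet below is a minimal sketch rather than part of the official export flow: it assumes the `bert_tflite/model.tflite` path and sequence length 128 from the CLI example above, and simply feeds zero-valued dummy inputs.

```python
# Minimal sanity-check sketch for the exported model (paths/shapes follow the example above).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy data with the shape and dtype each input tensor expects.
for detail in input_details:
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)  # e.g. (1, 128, 30522) for the bert-base-uncased export above
```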
huggingface/transformers/blob/main/docs/source/en/tflite.md
# An Introduction to Unity ML-Agents [[introduction-to-ml-agents]]

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit7/thumbnail.png" alt="thumbnail"/>

One of the challenges in Reinforcement Learning is **creating environments**. Fortunately for us, we can use game engines to do so. These engines, such as [Unity](https://unity.com/), [Godot](https://godotengine.org/) or [Unreal Engine](https://www.unrealengine.com/), are programs made to create video games. They are perfectly suited for creating environments: they provide physics systems, 2D/3D rendering, and more.

One of them, [Unity](https://unity.com/), created the [Unity ML-Agents Toolkit](https://github.com/Unity-Technologies/ml-agents), a plugin for the Unity game engine that allows us **to use it as an environment builder to train agents**. In the first bonus unit, this is what we used to train Huggy to catch a stick!

<figure>
  <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/example-envs.png" alt="MLAgents environments"/>
  <figcaption>Source: <a href="https://github.com/Unity-Technologies/ml-agents">ML-Agents documentation</a></figcaption>
</figure>

The Unity ML-Agents Toolkit provides many exceptional pre-made environments, ranging from playing football (soccer) to learning to walk and jumping over big walls.

In this Unit, we'll learn to use ML-Agents, but **don't worry if you don't know how to use the Unity Game Engine**: you don't need to use it to train your agents.

So, today, we're going to train two agents:

- The first one will learn to **shoot snowballs at a spawning target**.
- The second needs to **press a button to spawn a pyramid, then navigate to the pyramid, knock it over, and move to the gold brick at the top**. To do that, it will need to explore its environment, which will be done using a technique called curiosity.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit7/envs.png" alt="Environments" />

Then, after training, **you'll push the trained agents to the Hugging Face Hub**, and you'll be able to **visualize them playing directly in your browser without having to use the Unity Editor**.

Doing this Unit will **prepare you for the next challenge: AI vs. AI, where you will train agents in multi-agent environments and compete against your classmates' agents**.

Sound exciting? Let's get started!
huggingface/deep-rl-class/blob/main/units/en/unit5/introduction.mdx
```python
import argparse
import os

import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from peft import (
    get_peft_config,
    get_peft_model,
    get_peft_model_state_dict,
    set_peft_model_state_dict,
    PeftType,
    PrefixTuningConfig,
    PromptEncoderConfig,
)

import evaluate
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from tqdm import tqdm
```

```python
batch_size = 32
model_name_or_path = "roberta-large"
task = "mrpc"
peft_type = PeftType.PREFIX_TUNING
device = "cuda"
num_epochs = 20
```

```python
peft_config = PrefixTuningConfig(task_type="SEQ_CLS", num_virtual_tokens=20)
lr = 1e-2
```

```python
if any(k in model_name_or_path for k in ("gpt", "opt", "bloom")):
    padding_side = "left"
else:
    padding_side = "right"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side=padding_side)
if getattr(tokenizer, "pad_token_id") is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id

datasets = load_dataset("glue", task)
metric = evaluate.load("glue", task)


def tokenize_function(examples):
    # max_length=None => use the model max length (it's actually the default)
    outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
    return outputs


tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    remove_columns=["idx", "sentence1", "sentence2"],
)

# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")


def collate_fn(examples):
    return tokenizer.pad(examples, padding="longest", return_tensors="pt")


# Instantiate dataloaders.
train_dataloader = DataLoader(tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size)
eval_dataloader = DataLoader(
    tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=batch_size
)
```

```python
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path, return_dict=True)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
model
```

```python
optimizer = AdamW(params=model.parameters(), lr=lr)

# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=0.06 * (len(train_dataloader) * num_epochs),
    num_training_steps=(len(train_dataloader) * num_epochs),
)
```

```python
model.to(device)
for epoch in range(num_epochs):
    model.train()
    for step, batch in enumerate(tqdm(train_dataloader)):
        batch.to(device)
        outputs = model(**batch)
        loss = outputs.loss
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

    model.eval()
    for step, batch in enumerate(tqdm(eval_dataloader)):
        batch.to(device)
        with torch.no_grad():
            outputs = model(**batch)
        predictions = outputs.logits.argmax(dim=-1)
        predictions, references = predictions, batch["labels"]
        metric.add_batch(
            predictions=predictions,
            references=references,
        )

    eval_metric = metric.compute()
    print(f"epoch {epoch}:", eval_metric)
```

## Share adapters on the 🤗 Hub

```python
model.push_to_hub("smangrul/roberta-large-peft-prefix-tuning", use_auth_token=True)
```

## Load adapters from the Hub

You can also directly load adapters from the Hub using the commands below:

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer

peft_model_id = "smangrul/roberta-large-peft-prefix-tuning"
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the prefix tuning adapter on top of the base model
inference_model = PeftModel.from_pretrained(inference_model, peft_model_id)

inference_model.to(device)
inference_model.eval()
for step, batch in enumerate(tqdm(eval_dataloader)):
    batch.to(device)
    with torch.no_grad():
        outputs = inference_model(**batch)
    predictions = outputs.logits.argmax(dim=-1)
    predictions, references = predictions, batch["labels"]
    metric.add_batch(
        predictions=predictions,
        references=references,
    )

eval_metric = metric.compute()
print(eval_metric)
```
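As an optional extra that is not part of the original notebook, here is a hedged sketch of scoring a single sentence pair with the adapter model loaded above. The sentences are made up; in GLUE MRPC, label id 1 corresponds to "equivalent".

```python
# Optional extra (not in the original notebook): classify one MRPC-style sentence pair
# with the adapter model loaded above.
sentence1 = "The company reported strong quarterly earnings."
sentence2 = "Quarterly earnings reported by the company were strong."

inputs = tokenizer(sentence1, sentence2, return_tensors="pt").to(device)
with torch.no_grad():
    logits = inference_model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # 1 -> "equivalent", 0 -> "not equivalent" in GLUE MRPC
```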
huggingface/peft/blob/main/examples/sequence_classification/prefix_tuning.ipynb
---
title: Introducing the Hugging Face LLM Inference Container for Amazon SageMaker
thumbnail: /blog/assets/145_sagemaker-huggingface-llm/thumbnail.jpg
authors:
- user: philschmid
---

# Introducing the Hugging Face LLM Inference Container for Amazon SageMaker

This is an example of how to deploy open-source LLMs, like [BLOOM](https://huggingface.co/bigscience/bloom), to Amazon SageMaker for inference using the new Hugging Face LLM Inference Container. We will deploy the 12B [Pythia Open Assistant Model](https://huggingface.co/OpenAssistant/pythia-12b-sft-v8-7k-steps), an open-source Chat LLM trained with the Open Assistant dataset.

The example covers:
1. [Setup development environment](#1-setup-development-environment)
2. [Retrieve the new Hugging Face LLM DLC](#2-retrieve-the-new-hugging-face-llm-dlc)
3. [Deploy Open Assistant 12B to Amazon SageMaker](#3-deploy-deploy-open-assistant-12b-to-amazon-sagemaker)
4. [Run inference and chat with our model](#4-run-inference-and-chat-with-our-model)
5. [Create Gradio Chatbot backed by Amazon SageMaker](#5-create-gradio-chatbot-backed-by-amazon-sagemaker)

You can also find the code for the example in the [notebooks repository](https://github.com/huggingface/notebooks/blob/main/sagemaker/27_deploy_large_language_models/sagemaker-notebook.ipynb).

## What is Hugging Face LLM Inference DLC?

Hugging Face LLM DLC is a new purpose-built Inference Container to easily deploy LLMs in a secure and managed environment. The DLC is powered by [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference), an open-source, purpose-built solution for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation using Tensor Parallelism and dynamic batching for the most popular open-source LLMs, including StarCoder, BLOOM, GPT-NeoX, Llama, and T5. Text Generation Inference is already used by customers such as IBM, Grammarly, and the Open-Assistant initiative, and it implements optimizations for all supported model architectures, including:

- Tensor Parallelism and custom CUDA kernels
- Optimized transformers code for inference using [flash-attention](https://github.com/HazyResearch/flash-attention) on the most popular architectures
- Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
- [Continuous batching of incoming requests](https://github.com/huggingface/text-generation-inference/tree/main/router) for increased total throughput
- Accelerated weight loading (start-up time) with [safetensors](https://github.com/huggingface/safetensors)
- Logits warpers (temperature scaling, topk, repetition penalty ...)
- Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226) - Stop sequences, Log probabilities - Token streaming using Server-Sent Events (SSE) Officially supported model architectures are currently: - [BLOOM](https://huggingface.co/bigscience/bloom) / [BLOOMZ](https://huggingface.co/bigscience/bloomz) - [MT0-XXL](https://huggingface.co/bigscience/mt0-xxl) - [Galactica](https://huggingface.co/facebook/galactica-120b) - [SantaCoder](https://huggingface.co/bigcode/santacoder) - [GPT-Neox 20B](https://huggingface.co/EleutherAI/gpt-neox-20b) (joi, pythia, lotus, rosey, chip, RedPajama, open assistant) - [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) (T5-11B) - [Llama](https://github.com/facebookresearch/llama) (vicuna, alpaca, koala) - [Starcoder](https://huggingface.co/bigcode/starcoder) / [SantaCoder](https://huggingface.co/bigcode/santacoder) - [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) / [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) With the new Hugging Face LLM Inference DLCs on Amazon SageMaker, AWS customers can benefit from the same technologies that power highly concurrent, low latency LLM experiences like [HuggingChat](https://hf.co/chat), [OpenAssistant](https://open-assistant.io/), and Inference API for LLM models on the Hugging Face Hub. Let's get started! ## 1. Setup development environment We are going to use the `sagemaker` python SDK to deploy BLOOM to Amazon SageMaker. We need to make sure to have an AWS account configured and the `sagemaker` python SDK installed. ```python !pip install "sagemaker==2.175.0" --upgrade --quiet ``` If you are going to use Sagemaker in a local environment, you need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it. ```python import sagemaker import boto3 sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it not exists sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() try: role = sagemaker.get_execution_role() except ValueError: iam = boto3.client('iam') role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn'] sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker session region: {sess.boto_region_name}") ``` ## 2. Retrieve the new Hugging Face LLM DLC Compared to deploying regular Hugging Face models, we first need to retrieve the container uri and provide it to our `HuggingFaceModel` model class with a `image_uri` pointing to the image. To retrieve the new Hugging Face LLM DLC in Amazon SageMaker, we can use the `get_huggingface_llm_image_uri` method provided by the `sagemaker` SDK. This method allows us to retrieve the URI for the desired Hugging Face LLM DLC based on the specified `backend`, `session`, `region`, and `version`. You can find the available versions [here](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-text-generation-inference-containers) ```python from sagemaker.huggingface import get_huggingface_llm_image_uri # retrieve the llm image uri llm_image = get_huggingface_llm_image_uri( "huggingface", version="1.0.3" ) # print ecr image uri print(f"llm image uri: {llm_image}") ``` ## 3. 
Deploy Open Assistant 12B to Amazon SageMaker _Note: Quotas for Amazon SageMaker can vary between accounts. If you receive an error indicating you've exceeded your quota, you can increase them through the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home/services/sagemaker/quotas)._ To deploy [Open Assistant Model](OpenAssistant/pythia-12b-sft-v8-7k-steps) to Amazon SageMaker we create a `HuggingFaceModel` model class and define our endpoint configuration including the `hf_model_id`, `instance_type` etc. We will use a `g5.12xlarge` instance type, which has 4 NVIDIA A10G GPUs and 96GB of GPU memory. _Note: We could also optimize the deployment for cost and use `g5.2xlarge` instance type and enable int-8 quantization._ ```python import json from sagemaker.huggingface import HuggingFaceModel # sagemaker config instance_type = "ml.g5.12xlarge" number_of_gpu = 4 health_check_timeout = 300 # Define Model and Endpoint configuration parameter config = { 'HF_MODEL_ID': "OpenAssistant/pythia-12b-sft-v8-7k-steps", # model_id from hf.co/models 'SM_NUM_GPUS': json.dumps(number_of_gpu), # Number of GPU used per replica 'MAX_INPUT_LENGTH': json.dumps(1024), # Max length of input text 'MAX_TOTAL_TOKENS': json.dumps(2048), # Max length of the generation (including input text) # 'HF_MODEL_QUANTIZE': "bitsandbytes", # comment in to quantize } # create HuggingFaceModel with the image uri llm_model = HuggingFaceModel( role=role, image_uri=llm_image, env=config ) ``` After we have created the `HuggingFaceModel` we can deploy it to Amazon SageMaker using the `deploy` method. We will deploy the model with the `ml.g5.12xlarge` instance type. TGI will automatically distribute and shard the model across all GPUs. ```python # Deploy model to an endpoint # https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#sagemaker.model.Model.deploy llm = llm_model.deploy( initial_instance_count=1, instance_type=instance_type, # volume_size=400, # If using an instance with local SSD storage, volume_size must be None, e.g. p4 but not p3 container_startup_health_check_timeout=health_check_timeout, # 10 minutes to be able to load the model ) ``` SageMaker will now create our endpoint and deploy the model to it. This can take 5-10 minutes. ## 4. Run inference and chat with our model After our endpoint is deployed we can run inference on it using the `predict` method from the `predictor`. We can use different parameters to control the generation, defining them in the `parameters` attribute of the payload. As of today TGI supports the following parameters: - `temperature`: Controls randomness in the model. Lower values will make the model more deterministic and higher values will make the model more random. Default value is 1.0. - `max_new_tokens`: The maximum number of tokens to generate. Default value is 20, max value is 512. - `repetition_penalty`: Controls the likelihood of repetition, defaults to `null`. - `seed`: The seed to use for random generation, default is `null`. - `stop`: A list of tokens to stop the generation. The generation will stop when one of the tokens is generated. - `top_k`: The number of highest probability vocabulary tokens to keep for top-k-filtering. Default value is `null`, which disables top-k-filtering. - `top_p`: The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling, default to `null` - `do_sample`: Whether or not to use sampling; use greedy decoding otherwise. Default value is `false`. 
- `best_of`: Generate `best_of` sequences and return the one with the highest token log probabilities. Default value is `null`.
- `details`: Whether or not to return details about the generation. Default value is `false`.
- `return_full_text`: Whether or not to return the full text or only the generated part. Default value is `false`.
- `truncate`: Whether or not to truncate the input to the maximum length of the model. Default value is `true`.
- `typical_p`: The typical probability of a token. Default value is `null`.
- `watermark`: The watermark to use for the generation. Default value is `false`.

You can find the OpenAPI specification of TGI in the [Swagger documentation](https://huggingface.github.io/text-generation-inference/).

The `OpenAssistant/pythia-12b-sft-v8-7k-steps` model is a conversational chat model, meaning we can chat with it using the following prompt format:

```
<|prompter|>[Instruction]<|endoftext|>
<|assistant|>
```

Let's give it a first try and ask about some cool ideas to do in the summer:

```python
chat = llm.predict({
    "inputs": """<|prompter|>What are some cool ideas to do in the summer?<|endoftext|><|assistant|>"""
})

print(chat[0]["generated_text"])
#     <|prompter|>What are some cool ideas to do in the summer?<|endoftext|><|assistant|>There are many fun and exciting things you can do in the summer. Here are some ideas:
```

Now we will show how to use generation parameters in the `parameters` attribute of the payload. In addition to setting custom `temperature`, `top_p`, etc., we also stop generation after the turn of the `bot`.

```python
# define payload
prompt="""<|prompter|>How can i stay more active during winter? Give me 3 tips.<|endoftext|><|assistant|>"""

# hyperparameters for llm
payload = {
  "inputs": prompt,
  "parameters": {
    "do_sample": True,
    "top_p": 0.7,
    "temperature": 0.7,
    "top_k": 50,
    "max_new_tokens": 256,
    "repetition_penalty": 1.03,
    "stop": ["<|endoftext|>"]
  }
}

# send request to endpoint
response = llm.predict(payload)

# print(response[0]["generated_text"][:-len("<human>:")])
print(response[0]["generated_text"])
```

## 5. Create Gradio Chatbot backed by Amazon SageMaker

We can also create a Gradio application to chat with our model. Gradio is a Python library that allows you to quickly create customizable UI components around your machine learning models. You can find more about Gradio [here](https://gradio.app/).
```python !pip install gradio --upgrade ``` ```python import gradio as gr # hyperparameters for llm parameters = { "do_sample": True, "top_p": 0.7, "temperature": 0.7, "top_k": 50, "max_new_tokens": 256, "repetition_penalty": 1.03, "stop": ["<|endoftext|>"] } with gr.Blocks() as demo: gr.Markdown("## Chat with Amazon SageMaker") with gr.Column(): chatbot = gr.Chatbot() with gr.Row(): with gr.Column(): message = gr.Textbox(label="Chat Message Box", placeholder="Chat Message Box", show_label=False) with gr.Column(): with gr.Row(): submit = gr.Button("Submit") clear = gr.Button("Clear") def respond(message, chat_history): # convert chat history to prompt converted_chat_history = "" if len(chat_history) > 0: for c in chat_history: converted_chat_history += f"<|prompter|>{c[0]}<|endoftext|><|assistant|>{c[1]}<|endoftext|>" prompt = f"{converted_chat_history}<|prompter|>{message}<|endoftext|><|assistant|>" # send request to endpoint llm_response = llm.predict({"inputs": prompt, "parameters": parameters}) # remove prompt from response parsed_response = llm_response[0]["generated_text"][len(prompt):] chat_history.append((message, parsed_response)) return "", chat_history submit.click(respond, [message, chatbot], [message, chatbot], queue=False) clear.click(lambda: None, None, chatbot, queue=False) demo.launch(share=True) ``` ![Gradio Chat application](assets/145_sagemaker-huggingface-llm/gradio.png "Gradio Chat application") Awesome! 🚀 We have successfully deployed Open Assistant Model to Amazon SageMaker and run inference on it. Additionally, we have built a quick gradio application to chat with our model. Now, it's time for you to try it out yourself and build Generation AI applications with the new Hugging Face LLM DLC on Amazon SageMaker. To clean up, we can delete the model and endpoint. ```python llm.delete_model() llm.delete_endpoint() ``` ## Conclusion The new Hugging Face LLM Inference DLC enables customers to easily and securely deploy open-source LLMs on Amazon SageMaker. The easy-to-use API and deployment process allows customers to build scalable AI chatbots and virtual assistants with state-of-the-art models like Open Assistant. Overall, this new DLC is going to empower developers and businesses to leverage the latest advances in natural language generation. --- Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
huggingface/blog/blob/main/sagemaker-huggingface-llm.md
!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DETR ## Overview The DETR model was proposed in [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection. It greatly simplifies a lot of the complexity of models like Faster-R-CNN and Mask-R-CNN, which use things like region proposals, non-maximum suppression procedure and anchor generation. Moreover, DETR can also be naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs. The abstract from the paper is the following: *We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/facebookresearch/detr). ## How DETR works Here's a TLDR explaining how [`~transformers.DetrForObjectDetection`] works: First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use ResNet-50/ResNet-101). Let's assume we also add a batch dimension. This means that the input to the backbone is a tensor of shape `(batch_size, 3, height, width)`, assuming the image has 3 color channels (RGB). The CNN backbone outputs a new lower-resolution feature map, typically of shape `(batch_size, 2048, height/32, width/32)`. This is then projected to match the hidden dimension of the Transformer of DETR, which is `256` by default, using a `nn.Conv2D` layer. 
So now, we have a tensor of shape `(batch_size, 256, height/32, width/32).` Next, the feature map is flattened and transposed to obtain a tensor of shape `(batch_size, seq_len, d_model)` = `(batch_size, width/32*height/32, 256)`. So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller `d_model` (which in NLP is typically 768 or higher). Next, this is sent through the encoder, outputting `encoder_hidden_states` of the same shape (you can consider these as image features). Next, so-called **object queries** are sent through the decoder. This is a tensor of shape `(batch_size, num_queries, d_model)`, with `num_queries` typically set to 100 and initialized with zeros. These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to the encoder, they are added to the input of each attention layer. Each object query will look for a particular object in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers to output `decoder_hidden_states` of the same shape: `(batch_size, num_queries, d_model)`. Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or "no object", and a MLP to predict bounding boxes for each query. The model is trained using a **bipartite matching loss**: so what we actually do is compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The [Hungarian matching algorithm](https://en.wikipedia.org/wiki/Hungarian_algorithm) is used to find an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and [generalized IoU loss](https://giou.stanford.edu/) (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance segmentation). [`~transformers.DetrForSegmentation`] adds a segmentation mask head on top of [`~transformers.DetrForObjectDetection`]. The mask head can be trained either jointly, or in a two steps process, where one first trains a [`~transformers.DetrForObjectDetection`] model to detect bounding boxes around both "things" (instances) and "stuff" (background things like trees, roads, sky), then freeze all the weights and train only the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes. ## Usage tips - DETR uses so-called **object queries** to detect objects in an image. The number of queries determines the maximum number of objects that can be detected in a single image, and is set to 100 by default (see parameter `num_queries` of [`~transformers.DetrConfig`]). Note that it's good to have some slack (in COCO, the authors used 100, while the maximum number of objects in a COCO image is ~70). - The decoder of DETR updates the query embeddings in parallel. This is different from language models like GPT-2, which use autoregressive decoding instead of parallel. Hence, no causal attention mask is used. 
- DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting to queries and keys. For the position embeddings of the image, one can choose between fixed sinusoidal or learned absolute position embeddings. By default, the parameter `position_embedding_type` of [`~transformers.DetrConfig`] is set to `"sine"`. - During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter `auxiliary_loss` of [`~transformers.DetrConfig`] to `True`, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters). - If you want to train the model in a distributed environment across multiple nodes, then one should update the _num_boxes_ variable in the _DetrLoss_ class of _modeling_detr.py_. When training on multiple nodes, this should be set to the average number of target boxes across all nodes, as can be seen in the original implementation [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/models/detr.py#L227-L232). - [`~transformers.DetrForObjectDetection`] and [`~transformers.DetrForSegmentation`] can be initialized with any convolutional backbone available in the [timm library](https://github.com/rwightman/pytorch-image-models). Initializing with a MobileNet backbone for example can be done by setting the `backbone` attribute of [`~transformers.DetrConfig`] to `"tf_mobilenetv3_small_075"`, and then initializing the model with that config. - DETR resizes the input images such that the shortest side is at least a certain amount of pixels while the longest is at most 1333 pixels. At training time, scale augmentation is used such that the shortest side is randomly set to at least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use [`~transformers.DetrImageProcessor`] to prepare images (and optional annotations in COCO format) for the model. Due to this resizing, images in a batch can have different sizes. DETR solves this by padding images up to the largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding. Alternatively, one can also define a custom `collate_fn` in order to batch images together, using [`~transformers.DetrImageProcessor.pad_and_create_pixel_mask`]. - The size of the images will determine the amount of memory being used, and will thus determine the `batch_size`. It is advised to use a batch size of 2 per GPU. See [this Github thread](https://github.com/facebookresearch/detr/issues/150) for more info. 
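To make the pipeline described above concrete, here is a minimal object-detection sketch. The checkpoint, test image, and confidence threshold are illustrative choices rather than recommendations from this documentation.

```python
import torch
import requests
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

# Illustrative checkpoint and test image; any DETR object-detection checkpoint works the same way.
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The image processor resizes/normalizes the image and creates pixel_values (and a pixel_mask when batching).
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) triples in absolute (x_min, y_min, x_max, y_max) coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```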
There are three ways to instantiate a DETR model (depending on what you prefer): Option 1: Instantiate DETR with pre-trained weights for entire model ```py >>> from transformers import DetrForObjectDetection >>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50") ``` Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone ```py >>> from transformers import DetrConfig, DetrForObjectDetection >>> config = DetrConfig() >>> model = DetrForObjectDetection(config) ``` Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer ```py >>> config = DetrConfig(use_pretrained_backbone=False) >>> model = DetrForObjectDetection(config) ``` As a summary, consider the following table: | Task | Object detection | Instance segmentation | Panoptic segmentation | |------|------------------|-----------------------|-----------------------| | **Description** | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as "stuff" (i.e. background things like trees and roads) in an image | | **Model** | [`~transformers.DetrForObjectDetection`] | [`~transformers.DetrForSegmentation`] | [`~transformers.DetrForSegmentation`] | | **Example dataset** | COCO detection | COCO detection, COCO panoptic | COCO panoptic | | | **Format of annotations to provide to** [`~transformers.DetrImageProcessor`] | {'image_id': `int`, 'annotations': `List[Dict]`} each Dict being a COCO object annotation | {'image_id': `int`, 'annotations': `List[Dict]`} (in case of COCO detection) or {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} (in case of COCO panoptic) | {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} and masks_path (path to directory containing PNG files of the masks) | | **Postprocessing** (i.e. converting the output of the model to Pascal VOC format) | [`~transformers.DetrImageProcessor.post_process`] | [`~transformers.DetrImageProcessor.post_process_segmentation`] | [`~transformers.DetrImageProcessor.post_process_segmentation`], [`~transformers.DetrImageProcessor.post_process_panoptic`] | | **evaluators** | `CocoEvaluator` with `iou_types="bbox"` | `CocoEvaluator` with `iou_types="bbox"` or `"segm"` | `CocoEvaluator` with `iou_tupes="bbox"` or `"segm"`, `PanopticEvaluator` | In short, one should prepare the data either in COCO detection or COCO panoptic format, then use [`~transformers.DetrImageProcessor`] to create `pixel_values`, `pixel_mask` and optional `labels`, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the outputs of the model using one of the postprocessing methods of [`~transformers.DetrImageProcessor`]. These can be be provided to either `CocoEvaluator` or `PanopticEvaluator`, which allow you to calculate metrics like mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the [original repository](https://github.com/facebookresearch/detr). See the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR) for more info regarding evaluation. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR. 
<PipelineTag pipeline="object-detection"/>

- All example notebooks illustrating fine-tuning [`DetrForObjectDetection`] and [`DetrForSegmentation`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR).
- See also: [Object detection task guide](../tasks/object_detection)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## DetrConfig

[[autodoc]] DetrConfig

## DetrImageProcessor

[[autodoc]] DetrImageProcessor
    - preprocess
    - post_process_object_detection
    - post_process_semantic_segmentation
    - post_process_instance_segmentation
    - post_process_panoptic_segmentation

## DetrFeatureExtractor

[[autodoc]] DetrFeatureExtractor
    - __call__
    - post_process_object_detection
    - post_process_semantic_segmentation
    - post_process_instance_segmentation
    - post_process_panoptic_segmentation

## DETR specific outputs

[[autodoc]] models.detr.modeling_detr.DetrModelOutput

[[autodoc]] models.detr.modeling_detr.DetrObjectDetectionOutput

[[autodoc]] models.detr.modeling_detr.DetrSegmentationOutput

## DetrModel

[[autodoc]] DetrModel
    - forward

## DetrForObjectDetection

[[autodoc]] DetrForObjectDetection
    - forward

## DetrForSegmentation

[[autodoc]] DetrForSegmentation
    - forward
huggingface/transformers/blob/main/docs/source/en/model_doc/detr.md
Test Strategy Very brief, mildly aspirational test strategy document. This isn't where we are but it is where we want to get to. This document does not detail how to setup an environment or how to run the tests locally nor does it contain any best practices that we try to follow when writing tests, that information exists in the [contributing guide](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md). ## Objectives The purposes of all testing activities on Gradio fit one of the following objectives: 1. Ensure that the Gradio library functions as we expect it to. 2. Enable the maintenance team to quickly identify both the presence and source of defects. 3. Prevent regressions, i.e. if we fix something it should stay fixed. 4. Improve the quality of the codebase in order to ease maintenance efforts. 5. Reduce the amount of manual testing required. ## Scope Testing is always a tradeoff. We can't cover everything unless we want to spend all of our time writing and running tests. We should focus on a few keys areas. We should not focus on code coverage but on test coverage following the below criteria: - The documented Gradio API (that's the bit that users interact with via python) should be tested thoroughly. (1) - Additional gradio elements that are both publicly available and used internally (such as the Python and JS client libraries) should be tested thoroughly. (1) - Additional gradio elements that are publicly available should be tested as thoroughly as is reasonable (this could be things like demos/the gradio CLI/ other tooling). The importance of each individual component, and the appropriate investment of effort, needs to be assessed on a case-by-case basis. (1) - Element boundaries should be tested where there is reasonable cause to do so (e.g. config generation) (1) - Implementation details should only be tested where there is sufficient complexity to warrant it. (1) - Bug fixes should be accompanied by tests wherever is reasonably possible. (3) ## Types of testing Our tests will broadly fall into one of three categories: - Static Quality checks - Dynamic 'Code' tests - Dynamic Functional tests ### Static Quality checks Static quality checks are generally very fast to run and do not require building the code base. They also provide the least value. These tests would be things like linting, typechecking, and formatting. While they offer little in terms of testing functionality they align very closely with objective (4, 5) as they generally help to keep the codebase in good shape and offer very fast feedback. Such check are almost free from an authoring point of view as fixes can be mostly automated (either via scripts or editor integrations). ### Dynamic code tests These tests generally test either isolated pieces of code or test the relationship between parts of the code base. They sometimes test functionality or give indications of working functionality but never offer enough confidence to rely on them solely. These test are usually either unit or integration tests. They are generally pretty quick to write (especially unit tests) and run and offer a moderate amount of confidence. They align closely with Objectives 2 and 3 and a little bit of 1. These kind of tests should probably make up the bulk of our handwritten tests. ### Dynamic functional tests These tests give by far the most confidence as they are testing only the functionality of the software and do so by running the entire software itself, exactly as a user would. 
This aligns very closely with objective 1 but significantly impacts objective 5, as these tests are costly to both write and run. Despite the value, due to the downside we should try to get as much out of other tests types as we can, reserving functional testing for complex use cases and end-to-end journey. Tests in this category could be browser-based end-to-end tests, accessibility tests, or performance tests. They are sometimes called acceptance tests. ## Testing tools We currently use the following tools: ### Static quality checks - Python type-checking (python) - Black linting (python) - ruff formatting (python) - prettier formatting (javascript/svelte) - TypeScript type-checking (javascript/svelte) - eslint linting (javascript/svelte) [in progress] ### Dynamic code tests - pytest (python unit and integration tests) - vitest (node-based unit and integration tests) - playwright (browser-based unit and integration tests) ### Functional/acceptance tests - playwright (full end to end testing) - chromatic (visual testing) [in progress] - Accessibility testing [to do] ## Supported environments and versions All operating systems refer to the current runner variants supported by GitHub actions. All unspecified version segments (`x`) refer to latest. | Software | Version(s) | Operating System(s) | | -------- | --------------------- | --------------------------------- | | Python | `3.8.x` | `ubuntu-latest`, `windows-latest` | | Node | `18.x.x` | `ubuntu-latest` | | Browser | `playwright-chrome-x` | `ubuntu-latest` | ## Test execution Tests need to be executed in a number of environments and at different stages of the development cycle in order to be useful. The requirements for tests are as follows: - **Locally**: it is important that developers can easily run most tests locally to ensure a passing suite before making a PR. There are some exceptions to this, certain tests may require access to secret values which we cannot make available to all possible contributors for practical security reasons. It is reasonable that it isn't possible to run these tests but they should be disabled by default when running locally. - **CI** - It is _critical_ that all tests run successfully in CI with no exceptions. Not every test is required to pass to satisfy CI checks for practical reasons but it is required that all tests should run in CI and notify us if something unexpected happens in order for the development team to take appropriate action. For instructions on how to write and run tests see the [contributing guide](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md). ## Managing defects As we formalise our testing strategy and bring / keep our test up to standard, it is important that we have some principles on managing defects as they occur/ are reported. For now we can have one very simple rule: - Every bug fix should be accompanied by a test that failed before the fix and passes afterwards. This test should _typically_ be a dynamic code test but it could be a linting rule or new type if that is appropriate. There are always exceptions but we should think very carefully before ignoring this rule.
gradio-app/gradio/blob/main/test-strategy.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Building custom models The 🤗 Transformers library is designed to be easily extensible. Every model is fully coded in a given subfolder of the repository with no abstraction, so you can easily copy a modeling file and tweak it to your needs. If you are writing a brand new model, it might be easier to start from scratch. In this tutorial, we will show you how to write a custom model and its configuration so it can be used inside Transformers, and how you can share it with the community (with the code it relies on) so that anyone can use it, even if it's not present in the 🤗 Transformers library. We'll see how to build upon transformers and extend the framework with your hooks and custom code. We will illustrate all of this on a ResNet model, by wrapping the ResNet class of the [timm library](https://github.com/rwightman/pytorch-image-models) into a [`PreTrainedModel`]. ## Writing a custom configuration Before we dive into the model, let's first write its configuration. The configuration of a model is an object that will contain all the necessary information to build the model. As we will see in the next section, the model can only take a `config` to be initialized, so we really need that object to be as complete as possible. In our example, we will take a couple of arguments of the ResNet class that we might want to tweak. Different configurations will then give us the different types of ResNets that are possible. We then just store those arguments, after checking the validity of a few of them. ```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` The three important things to remember when writing you own configuration are the following: - you have to inherit from `PretrainedConfig`, - the `__init__` of your `PretrainedConfig` must accept any kwargs, - those `kwargs` need to be passed to the superclass `__init__`. 
The inheritance is to make sure you get all the functionality from the 🤗 Transformers library, while the two other constraints come from the fact a `PretrainedConfig` has more fields than the ones you are setting. When reloading a config with the `from_pretrained` method, those fields need to be accepted by your config and then sent to the superclass. Defining a `model_type` for your configuration (here `model_type="resnet"`) is not mandatory, unless you want to register your model with the auto classes (see last section). With this done, you can easily create and save your configuration like you would do with any other model config of the library. Here is how we can create a resnet50d config and save it: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` This will save a file named `config.json` inside the folder `custom-resnet`. You can then reload your config with the `from_pretrained` method: ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` You can also use any other method of the [`PretrainedConfig`] class, like [`~PretrainedConfig.push_to_hub`] to directly upload your config to the Hub. ## Writing a custom model Now that we have our ResNet configuration, we can go on writing the model. We will actually write two: one that extracts the hidden features from a batch of images (like [`BertModel`]) and one that is suitable for image classification (like [`BertForSequenceClassification`]). As we mentioned before, we'll only write a loose wrapper of the model to keep it simple for this example. The only thing we need to do before writing this class is to define a map between the block types and actual block classes. Then the model is defined from the configuration by passing everything to the `ResNet` class: ```py from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) ``` For the model that will classify images, we just change the forward method: ```py import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.functional.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` In both cases, notice how we inherit from `PreTrainedModel` and call the superclass initialization with the `config` (a bit like when you write a regular `torch.nn.Module`). 
The line that sets the `config_class` is not mandatory, unless you want to register your model with the auto classes (see last section). <Tip> If your model is very similar to a model inside the library, you can re-use the same configuration as this model. </Tip> You can have your model return anything you want, but returning a dictionary like we did for `ResnetModelForImageClassification`, with the loss included when labels are passed, will make your model directly usable inside the [`Trainer`] class. Using another output format is fine as long as you are planning on using your own training loop or another library for training. Now that we have our model class, let's create one: ```py resnet50d = ResnetModelForImageClassification(resnet50d_config) ``` Again, you can use any of the methods of [`PreTrainedModel`], like [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`]. We will use the second in the next section, and see how to push the model weights with the code of our model. But first, let's load some pretrained weights inside our model. In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial, we will use the pretrained version of the resnet50d. Since our model is just a wrapper around it, it's going to be easy to transfer those weights: ```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Now let's see how to make sure that when we do [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`], the code of the model is saved. ## Registering a model with custom code to the auto classes If you are writing a library that extends 🤗 Transformers, you may want to extend the auto classes to include your own model. This is different from pushing the code to the Hub in the sense that users will need to import your library to get the custom models (contrarily to automatically downloading the model code from the Hub). As long as your config has a `model_type` attribute that is different from existing model types, and that your model classes have the right `config_class` attributes, you can just add them to the auto classes like this: ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` Note that the first argument used when registering your custom config to [`AutoConfig`] needs to match the `model_type` of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the `config_class` of those models. ## Sending the code to the Hub <Tip warning={true}> This API is experimental and may have some slight breaking changes in the next releases. </Tip> First, make sure your model is fully defined in a `.py` file. It can rely on relative imports to some other files as long as all the files are in the same directory (we don't support submodules for this feature yet). For our example, we'll define a `modeling_resnet.py` file and a `configuration_resnet.py` file in a folder of the current working directory named `resnet_model`. The configuration file contains the code for `ResnetConfig` and the modeling file contains the code of `ResnetModel` and `ResnetModelForImageClassification`. ``` . 
└── resnet_model ├── __init__.py ├── configuration_resnet.py └── modeling_resnet.py ``` The `__init__.py` can be empty, it's just there so that Python detects `resnet_model` can be use as a module. <Tip warning={true}> If copying a modeling files from the library, you will need to replace all the relative imports at the top of the file to import from the `transformers` package. </Tip> Note that you can re-use (or subclass) an existing configuration/model. To share your model with the community, follow those steps: first import the ResNet model and config from the newly created files: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` Then you have to tell the library you want to copy the code files of those objects when using the `save_pretrained` method and properly register them with a given Auto class (especially for models), just run: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` Note that there is no need to specify an auto class for the configuration (there is only one auto class for them, [`AutoConfig`]) but it's different for models. Your custom model could be suitable for many different tasks, so you have to specify which one of the auto classes is the correct one for your model. <Tip> Use `register_for_auto_class()` if you want the code files to be copied. If you instead prefer to use code on the Hub from another repo, you don't need to call it. In cases where there's more than one auto class, you can modify the `config.json` directly using the following structure: ``` "auto_map": { "AutoConfig": "<your-repo-name>--<config-name>", "AutoModel": "<your-repo-name>--<config-name>", "AutoModelFor<Task>": "<your-repo-name>--<config-name>", }, ``` </Tip> Next, let's create the config and models as we did before: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Now to send the model to the Hub, make sure you are logged in. Either run in your terminal: ```bash huggingface-cli login ``` or from a notebook: ```py from huggingface_hub import notebook_login notebook_login() ``` You can then push to your own namespace (or an organization you are a member of) like this: ```py resnet50d.push_to_hub("custom-resnet50d") ``` On top of the modeling weights and the configuration in json format, this also copied the modeling and configuration `.py` files in the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result in this [model repo](https://huggingface.co/sgugger/custom-resnet50d). See the [sharing tutorial](model_sharing) for more information on the push to Hub method. ## Using a model with custom code You can use any configuration, model or tokenizer with custom code files in its repository with the auto-classes and the `from_pretrained` method. All files and code uploaded to the Hub are scanned for malware (refer to the [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) documentation for more information), but you should still review the model code and author to avoid executing malicious code on your machine. 
Set `trust_remote_code=True` to use a model with custom code: ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` It is also strongly encouraged to pass a commit hash as a `revision` to make sure the author of the models did not update the code with some malicious new lines (unless you fully trust the authors of the models). ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` Note that when browsing the commit history of the model repo on the Hub, there is a button to easily copy the commit hash of any commit.
huggingface/transformers/blob/main/docs/source/en/custom_models.md
-- title: RL Reliability emoji: 🤗 colorFrom: blue colorTo: red sdk: gradio sdk_version: 3.19.1 app_file: app.py pinned: false tags: - evaluate - metric description: >- Computes the RL reliability metrics from a set of experiments. There is an `"online"` and `"offline"` configuration for evaluation. --- # Metric Card for RL Reliability ## Metric Description The RL Reliability Metrics library provides a set of metrics for measuring the reliability of reinforcement learning (RL) algorithms. ## How to Use ```python import evaluate import numpy as np rl_reliability = evaluate.load("rl_reliability", "online") results = rl_reliability.compute( timesteps=[np.linspace(0, 2000000, 1000)], rewards=[np.linspace(0, 100, 1000)] ) rl_reliability = evaluate.load("rl_reliability", "offline") results = rl_reliability.compute( timesteps=[np.linspace(0, 2000000, 1000)], rewards=[np.linspace(0, 100, 1000)] ) ``` ### Inputs - **timesteps** *(List[int]): For each run a an list/array with its timesteps.* - **rewards** *(List[float]): For each run a an list/array with its rewards.* KWARGS: - **baseline="default"** *(Union[str, float]) Normalization used for curves. When `"default"` is passed the curves are normalized by their range in the online setting and by the median performance across runs in the offline case. When a float is passed the curves are divided by that value.* - **eval_points=[50000, 150000, ..., 2000000]** *(List[int]) Statistics will be computed at these points* - **freq_thresh=0.01** *(float) Frequency threshold for low-pass filtering.* - **window_size=100000** *(int) Defines a window centered at each eval point.* - **window_size_trimmed=99000** *(int) To handle shortened curves due to differencing* - **alpha=0.05** *(float)The "value at risk" (VaR) cutoff point, a float in the range [0,1].* ### Output Values In `"online"` mode: - HighFreqEnergyWithinRuns: High Frequency across Time (DT) - IqrWithinRuns: IQR across Time (DT) - MadWithinRuns: 'MAD across Time (DT) - StddevWithinRuns: Stddev across Time (DT) - LowerCVaROnDiffs: Lower CVaR on Differences (SRT) - UpperCVaROnDiffs: Upper CVaR on Differences (SRT) - MaxDrawdown: Max Drawdown (LRT) - LowerCVaROnDrawdown: Lower CVaR on Drawdown (LRT) - UpperCVaROnDrawdown: Upper CVaR on Drawdown (LRT) - LowerCVaROnRaw: Lower CVaR on Raw - UpperCVaROnRaw: Upper CVaR on Raw - IqrAcrossRuns: IQR across Runs (DR) - MadAcrossRuns: MAD across Runs (DR) - StddevAcrossRuns: Stddev across Runs (DR) - LowerCVaROnAcross: Lower CVaR across Runs (RR) - UpperCVaROnAcross: Upper CVaR across Runs (RR) - MedianPerfDuringTraining: Median Performance across Runs In `"offline"` mode: - MadAcrossRollouts: MAD across rollouts (DF) - IqrAcrossRollouts: IQR across rollouts (DF) - LowerCVaRAcrossRollouts: Lower CVaR across rollouts (RF) - UpperCVaRAcrossRollouts: Upper CVaR across rollouts (RF) - MedianPerfAcrossRollouts: Median Performance across rollouts ### Examples First get the sample data from the repository: ```bash wget https://storage.googleapis.com/rl-reliability-metrics/data/tf_agents_example_csv_dataset.tgz tar -xvzf tf_agents_example_csv_dataset.tgz ``` Load the sample data: ```python dfs = [pd.read_csv(f"./csv_data/sac_humanoid_{i}_train.csv") for i in range(1, 4)] ``` Compute the metrics: ```python rl_reliability = evaluate.load("rl_reliability", "online") rl_reliability.compute(timesteps=[df["Metrics/EnvironmentSteps"] for df in dfs], rewards=[df["Metrics/AverageReturn"] for df in dfs]) ``` ## Limitations and Bias This implementation of RL reliability metrics 
does not compute permutation tests to determine whether algorithms are statistically different in their metric values and also does not compute bootstrap confidence intervals on the rankings of the algorithms. See the [original library](https://github.com/google-research/rl-reliability-metrics/) for more resources. ## Citation ```bibtex @conference{rl_reliability_metrics, title = {Measuring the Reliability of Reinforcement Learning Algorithms}, author = {Stephanie CY Chan, Sam Fishman, John Canny, Anoop Korattikara, and Sergio Guadarrama}, booktitle = {International Conference on Learning Representations, Addis Ababa, Ethiopia}, year = 2020, } ``` ## Further References - Homepage: https://github.com/google-research/rl-reliability-metrics
huggingface/evaluate/blob/main/metrics/rl_reliability/README.md
Datasets 🤝 Arrow ## What is Arrow? [Arrow](https://arrow.apache.org/) enables large amounts of data to be processed and moved quickly. It is a specific data format that stores data in a columnar memory layout. This provides several significant advantages: * Arrow's standard format allows [zero-copy reads](https://en.wikipedia.org/wiki/Zero-copy) which removes virtually all serialization overhead. * Arrow is language-agnostic so it supports different programming languages. * Arrow is column-oriented so it is faster at querying and processing slices or columns of data. * Arrow allows for copy-free hand-offs to standard machine learning tools such as NumPy, Pandas, PyTorch, and TensorFlow. * Arrow supports many, possibly nested, column types. ## Memory-mapping 🤗 Datasets uses Arrow for its local caching system. It allows datasets to be backed by an on-disk cache, which is memory-mapped for fast lookup. This architecture allows for large datasets to be used on machines with relatively small device memory. For example, loading the full English Wikipedia dataset only takes a few MB of RAM: ```python >>> import os; import psutil; import timeit >>> from datasets import load_dataset # Process.memory_info is expressed in bytes, so convert to megabytes >>> mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) >>> wiki = load_dataset("wikipedia", "20220301.en", split="train") >>> mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) >>> print(f"RAM memory used: {(mem_after - mem_before)} MB") RAM memory used: 50 MB ``` This is possible because the Arrow data is actually memory-mapped from disk, and not loaded in memory. Memory-mapping allows access to data on disk, and leverages virtual memory capabilities for fast lookups. ## Performance Iterating over a memory-mapped dataset using Arrow is fast. Iterating over Wikipedia on a laptop gives you speeds of 1-3 Gbit/s: ```python >>> s = """batch_size = 1000 ... for batch in wiki.iter(batch_size): ... ... ... """ >>> elapsed_time = timeit.timeit(stmt=s, number=1, globals=globals()) >>> print(f"Time to iterate over the {wiki.dataset_size >> 30} GB dataset: {elapsed_time:.1f} sec, " ... f"ie. {float(wiki.dataset_size >> 27)/elapsed_time:.1f} Gb/s") Time to iterate over the 18 GB dataset: 31.8 sec, ie. 4.8 Gb/s ```
huggingface/datasets/blob/main/docs/source/about_arrow.md
Your First Docker Space: Text Generation with T5 In the following sections, you'll learn the basics of creating a Docker Space, configuring it, and deploying your code to it. We'll create a **Text Generation** Space with Docker that'll be used to demo the [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) model, which can generate text given some input text, using FastAPI as the server. You can find a completed version of this hosted [here](https://huggingface.co/spaces/DockerTemplates/fastapi_t5). ## Create a new Docker Space We'll start by [creating a brand new Space](https://huggingface.co/new-space) and choosing **Docker** as our SDK. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/huggingface.co_new-space_docker.jpg"/> </div> Hugging Face Spaces are Git repositories, meaning that you can work on your Space incrementally (and collaboratively) by pushing commits. Take a look at the [Getting Started with Repositories](./repositories-getting-started) guide to learn about how you can create and edit files before continuing. If you prefer to work with a UI, you can also do the work directly in the browser. Selecting **Docker** as the SDK when [creating a new Space](https://huggingface.co/new-space) will initialize your Docker Space by setting the `sdk` property to `docker` in your `README.md` file's YAML block. ```yaml sdk: docker ``` You have the option to change the default application port of your Space by setting the `app_port` property in your `README.md` file's YAML block. The default port is `7860`. ```yaml app_port: 7860 ``` ## Add the dependencies For the **Text Generation** Space, we'll be building a FastAPI app that showcases a text generation model called Flan T5. For the model inference, we'll be using a [🤗 Transformers pipeline](https://huggingface.co/docs/transformers/pipeline_tutorial) to use the model. We need to start by installing a few dependencies. This can be done by creating a **requirements.txt** file in our repository, and adding the following dependencies to it: ``` fastapi==0.74.* requests==2.27.* sentencepiece==0.1.* torch==1.11.* transformers==4.* uvicorn[standard]==0.17.* ``` These dependencies will be installed in the Dockerfile we'll create later. ## Create the app Let's kick off the process with a dummy FastAPI app to see that we can get an endpoint working. The first step is to create an app file, in this case, we'll call it `main.py`. ```python from fastapi import FastAPI app = FastAPI() @app.get("/") def read_root(): return {"Hello": "World!"} ``` ## Create the Dockerfile The main step for a Docker Space is creating a Dockerfile. You can read more about Dockerfiles [here](https://docs.docker.com/get-started/). Although we're using FastAPI in this tutorial, Dockerfiles give great flexibility to users allowing you to build a new generation of ML demos. Let's write the Dockerfile for our application ```Dockerfile FROM python:3.9 WORKDIR /code COPY ./requirements.txt /code/requirements.txt RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt COPY . . CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] ``` When the changes are saved, the Space will rebuild and your demo should be up after a couple of seconds! [Here](https://huggingface.co/spaces/DockerTemplates/fastapi_dummy) is an example result at this point. 
<div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/huggingface.co_spaces_docker_dummy.jpg"/> </div> ### Testing locally **Tip for power users (you can skip):** If you're developing locally, this is a good moment in which you can do `docker build` and `docker run` to debug locally, but it's even easier to push the changes to the Hub and see how it looks like! ```bash docker build -t fastapi . docker run -it -p 7860:7860 fastapi ``` If you have [Secrets](spaces-sdks-docker#secret-management) you can use `docker buildx` and pass the secrets as build arguments ```bash export SECRET_EXAMPLE="my_secret_value" docker buildx build --secret id=SECRET_EXAMPLE,env=SECRET_EXAMPLE -t fastapi . ``` and run with `docker run` passing the secrets as environment variables ```bash export SECRET_EXAMPLE="my_secret_value" docker run -it -p 7860:7860 -e SECRET_EXAMPLE=$SECRET_EXAMPLE fastapi ``` ## Adding some ML to our app As mentioned before, the idea is to use a Flan T5 model for text generation. We'll want to add some HTML and CSS for an input field, so let's create a directory called static with `index.html`, `style.css`, and `script.js` files. At this moment, your file structure should look like this: ```bash /static /static/index.html /static/script.js /static/style.css Dockerfile main.py README.md requirements.txt ``` Let's go through all the steps to make this working. We'll skip some of the details of the CSS and HTML. You can find the whole code in the Files and versions tab of the [DockerTemplates/fastapi_t5](https://huggingface.co/spaces/DockerTemplates/fastapi_t5) Space. 1. Write the FastAPI endpoint to do inference We'll use the `pipeline` from `transformers` to load the [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) model. We'll set an endpoint called `infer_t5` that receives and input and outputs the result of the inference call ```python from transformers import pipeline pipe_flan = pipeline("text2text-generation", model="google/flan-t5-small") @app.get("/infer_t5") def t5(input): output = pipe_flan(input) return {"output": output[0]["generated_text"]} ``` 2. Write the `index.html` to have a simple form containing the code of the page. ```html <main> <section id="text-gen"> <h2>Text generation using Flan T5</h2> <p> Model: <a href="https://huggingface.co/google/flan-t5-small" rel="noreferrer" target="_blank" >google/flan-t5-small </a> </p> <form class="text-gen-form"> <label for="text-gen-input">Text prompt</label> <input id="text-gen-input" type="text" value="German: There are many ducks" /> <button id="text-gen-submit">Submit</button> <p class="text-gen-output"></p> </form> </section> </main> ``` 3. In the `app.py` file, mount the static files and show the html file in the root route ```python app.mount("/", StaticFiles(directory="static", html=True), name="static") @app.get("/") def index() -> FileResponse: return FileResponse(path="/app/static/index.html", media_type="text/html") ``` 4. 
In the `script.js` file, make it handle the request ```javascript const textGenForm = document.querySelector(".text-gen-form"); const translateText = async (text) => { const inferResponse = await fetch(`infer_t5?input=${text}`); const inferJson = await inferResponse.json(); return inferJson.output; }; textGenForm.addEventListener("submit", async (event) => { event.preventDefault(); const textGenInput = document.getElementById("text-gen-input"); const textGenParagraph = document.querySelector(".text-gen-output"); textGenParagraph.textContent = await translateText(textGenInput.value); }); ``` 5. Grant permissions to the right directories As discussed in the [Permissions Section](./spaces-sdks-docker#permissions), the container runs with user ID 1000. That means that the Space might face permission issues. For example, `transformers` downloads and caches the models in the path under the `HUGGINGFACE_HUB_CACHE` path. The easiest way to solve this is to create a user with righ permissions and use it to run the container application. We can do this by adding the following lines to the `Dockerfile`. ```Dockerfile # Set up a new user named "user" with user ID 1000 RUN useradd -m -u 1000 user # Switch to the "user" user USER user # Set home to the user's home directory ENV HOME=/home/user \ PATH=/home/user/.local/bin:$PATH # Set the working directory to the user's home directory WORKDIR $HOME/app # Copy the current directory contents into the container at $HOME/app setting the owner to the user COPY --chown=user . $HOME/app ``` The final `Dockerfile` should look like this: ```Dockerfile FROM python:3.9 WORKDIR /code COPY ./requirements.txt /code/requirements.txt RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt RUN useradd -m -u 1000 user USER user ENV HOME=/home/user \ PATH=/home/user/.local/bin:$PATH WORKDIR $HOME/app COPY --chown=user . $HOME/app CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] ``` Success! Your app should be working now! Check out [DockerTemplates/fastapi_t5](https://huggingface.co/spaces/DockerTemplates/fastapi_t5) to see the final result. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/huggingface.co_spaces_docker_fastapi_t5.jpg"/> </div> What a journey! Please remember that Docker Spaces give you lots of freedom, so you're not limited to use FastAPI. From a [Go Endpoint](https://huggingface.co/spaces/DockerTemplates/test-docker-go) to a [Shiny App](https://huggingface.co/spaces/DockerTemplates/shiny-with-python), the limit is the moon! Check out [some official examples](./spaces-sdks-docker-examples). You can also upgrade your Space to a GPU if needed 😃 ## Debugging You can debug your Space by checking the **Build** and **Container** logs. Click on the **Open Logs** button to open the modal. 
<div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/huggingface.co_spaces_docker_fastapi_t5_3.jpg"/> </div> If everything went well, you will see `Pushing Image` and `Scheduling Space` on the **Build** tab <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/huggingface.co_spaces_docker_fastapi_t5_1.jpg"/> </div> On the **Container** tab, you will see the application status, in this case, `Uvicorn running on http://0.0.0.0:7860` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/huggingface.co_spaces_docker_fastapi_t5_2.jpg"/> </div> ## Read More - [Docker Spaces](spaces-sdks-docker) - [List of Docker Spaces examples](spaces-sdks-docker-examples)
huggingface/hub-docs/blob/main/docs/hub/spaces-sdks-docker-first-demo.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DPR <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=dpr"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-dpr-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/dpr-question_encoder-bert-base-multilingual"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. It was introduced in [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen and Wen-tau Yih. The abstract from the paper is the following: *Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.* This model was contributed by [lhoestq](https://huggingface.co/lhoestq). The original code can be found [here](https://github.com/facebookresearch/DPR). ## Usage tips - DPR consists of three models: * Question encoder: encodes questions as vectors * Context encoder: encodes contexts as vectors * Reader: extracts the answer to the questions from the retrieved contexts, along with a relevance score (high if the inferred span actually answers the question). 
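For illustration, here is a minimal sketch of encoding a question with the question encoder; the checkpoint name is one of the publicly available DPR checkpoints on the Hub, and the question text is just an example:

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Load one of the public DPR question-encoder checkpoints
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

# Encode a question as a dense vector (the pooled output of the encoder)
inputs = tokenizer("What is dense passage retrieval?", return_tensors="pt")
question_embedding = model(**inputs).pooler_output  # shape: (1, 768)
```

The context encoder (`DPRContextEncoder` with `DPRContextEncoderTokenizer`) is used the same way on passages, and relevance between a question and a passage can then be scored with a dot product of their embeddings.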
## DPRConfig [[autodoc]] DPRConfig ## DPRContextEncoderTokenizer [[autodoc]] DPRContextEncoderTokenizer ## DPRContextEncoderTokenizerFast [[autodoc]] DPRContextEncoderTokenizerFast ## DPRQuestionEncoderTokenizer [[autodoc]] DPRQuestionEncoderTokenizer ## DPRQuestionEncoderTokenizerFast [[autodoc]] DPRQuestionEncoderTokenizerFast ## DPRReaderTokenizer [[autodoc]] DPRReaderTokenizer ## DPRReaderTokenizerFast [[autodoc]] DPRReaderTokenizerFast ## DPR specific outputs [[autodoc]] models.dpr.modeling_dpr.DPRContextEncoderOutput [[autodoc]] models.dpr.modeling_dpr.DPRQuestionEncoderOutput [[autodoc]] models.dpr.modeling_dpr.DPRReaderOutput <frameworkcontent> <pt> ## DPRContextEncoder [[autodoc]] DPRContextEncoder - forward ## DPRQuestionEncoder [[autodoc]] DPRQuestionEncoder - forward ## DPRReader [[autodoc]] DPRReader - forward </pt> <tf> ## TFDPRContextEncoder [[autodoc]] TFDPRContextEncoder - call ## TFDPRQuestionEncoder [[autodoc]] TFDPRQuestionEncoder - call ## TFDPRReader [[autodoc]] TFDPRReader - call </tf> </frameworkcontent>
huggingface/transformers/blob/main/docs/source/en/model_doc/dpr.md
he pipeline function. The pipeline function is the most high-level API of the Transformers library. It regroups together all the steps to go from raw texts to usable predictions. The model used is at the core of a pipeline, but the pipeline also include all the necessary pre-processing (since the model does not expect texts, but numbers) as well as some post-processing to make the output of the model human-readable. Let's look at a first example with the sentiment analysis pipeline. This pipeline performs text classification on a given input, and determines if it's positive or negative. Here, it attributed the positive label on the given text, with a confidence of 95%. You can pass multiple texts to the same pipeline, which will be processed and passed through the model together, as a batch. The output is a list of individual results, in the same order as the input texts. Here we find the same label and score for the first text, and the second text is judged positive with a confidence of 99.99%. The zero-shot classification pipeline is a more general text-classification pipeline: it allows you to provide the labels you want. Here we want to classify our input text along the labels "education", "politics" and "business". The pipeline successfully recognizes it's more about education than the other labels, with a confidence of 84%. Moving on to other tasks, the text generation pipeline will auto-complete a given prompt. The output is generated with a bit of randomness, so it changes each time you call the generator object on a given prompt. Up until now, we have used the pipeline API with the default model associated to each task, but you can use it with any model that has been pretrained or fine-tuned on this task. Going on the model hub (huggingface.co/models), you can filter the available models by task. The default model used in our previous example was gpt2, but there are many more models available, and not just in English! Let's go back to the text generation pipeline and load it with another model, distilgpt2. This is a lighter version of gpt2 created by the Hugging Face team. When applying the pipeline to a given prompt, we can specify several arguments, such as the maximum length of the generated texts, or the number of sentences we want to return (since there is some randomness in the generation). Generating text by guessing the next word in a sentence was the pretraining objective of GPT-2, the fill mask pipeline is the pretraining objective of BERT, which is to guess the value of masked word. In this case, we ask the two most likely values for the missing words (according to the model) and get mathematical or computational as possible answers. Another task Transformers model can perform is to classify each word in the sentence instead of the sentence as a whole. One example of this is Named Entity Recognition, which is the task of identifying entities, such as persons, organizations or locations in a sentence. Here, the model correctly finds the person (Sylvain), the organization (Hugging Face) as well as the location (Brooklyn) inside the input text. The grouped_entities=True argument used is to make the pipeline group together the different words linked to the same entity (such as Hugging and Face here). Another task available with the pipeline API is extractive question answering. Providing a context and a question, the model will identify the span of text in the context containing the answer to the question. 
Getting short summaries of very long articles is also something the Transformers library can help with, thanks to the summarization pipeline. Finally, the last task supported by the pipeline API is translation. Here we use a French/English model found on the model hub to get the English version of our input text. Here is a brief summary of all the tasks we looked into in this video. Try them out through the inference widgets in the model hub!
huggingface/course/blob/main/subtitles/en/raw/chapter1/03_the-pipeline-function.md
-- title: emoji: 🤗 colorFrom: blue colorTo: red sdk: gradio sdk_version: 3.19.1 app_file: app.py pinned: false tags: - evaluate - metric description: >- CoVal is a coreference evaluation tool for the CoNLL and ARRAU datasets which implements of the common evaluation metrics including MUC [Vilain et al, 1995], B-cubed [Bagga and Baldwin, 1998], CEAFe [Luo et al., 2005], LEA [Moosavi and Strube, 2016] and the averaged CoNLL score (the average of the F1 values of MUC, B-cubed and CEAFe) [Denis and Baldridge, 2009a; Pradhan et al., 2011]. This wrapper of CoVal currently only work with CoNLL line format: The CoNLL format has one word per line with all the annotation for this word in column separated by spaces: Column Type Description 1 Document ID This is a variation on the document filename 2 Part number Some files are divided into multiple parts numbered as 000, 001, 002, ... etc. 3 Word number 4 Word itself This is the token as segmented/tokenized in the Treebank. Initially the *_skel file contain the placeholder [WORD] which gets replaced by the actual token from the Treebank which is part of the OntoNotes release. 5 Part-of-Speech 6 Parse bit This is the bracketed structure broken before the first open parenthesis in the parse, and the word/part-of-speech leaf replaced with a *. The full parse can be created by substituting the asterix with the "([pos] [word])" string (or leaf) and concatenating the items in the rows of that column. 7 Predicate lemma The predicate lemma is mentioned for the rows for which we have semantic role information. All other rows are marked with a "-" 8 Predicate Frameset ID This is the PropBank frameset ID of the predicate in Column 7. 9 Word sense This is the word sense of the word in Column 3. 10 Speaker/Author This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. 11 Named Entities These columns identifies the spans representing various named entities. 12:N Predicate Arguments There is one column each of predicate argument structure information for the predicate mentioned in Column 7. N Coreference Coreference chain information encoded in a parenthesis structure. More informations on the format can be found here (section "*_conll File Format"): http://www.conll.cemantix.org/2012/data.html Details on the evaluation on CoNLL can be found here: https://github.com/ns-moosavi/coval/blob/master/conll/README.md CoVal code was written by @ns-moosavi. Some parts are borrowed from https://github.com/clarkkev/deep-coref/blob/master/evaluation.py The test suite is taken from https://github.com/conll/reference-coreference-scorers/ Mention evaluation and the test suite are added by @andreasvc. Parsing CoNLL files is developed by Leo Born. --- ## Metric description CoVal is a coreference evaluation tool for the [CoNLL](https://huggingface.co/datasets/conll2003) and [ARRAU](https://catalog.ldc.upenn.edu/LDC2013T22) datasets which implements of the common evaluation metrics including MUC [Vilain et al, 1995](https://aclanthology.org/M95-1005.pdf), B-cubed [Bagga and Baldwin, 1998](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.2578&rep=rep1&type=pdf), CEAFe [Luo et al., 2005](https://aclanthology.org/H05-1004.pdf), LEA [Moosavi and Strube, 2016](https://aclanthology.org/P16-1060.pdf) and the averaged CoNLL score (the average of the F1 values of MUC, B-cubed and CEAFe). 
CoVal code was written by [`@ns-moosavi`](https://github.com/ns-moosavi), with some parts borrowed from [Deep Coref](https://github.com/clarkkev/deep-coref/blob/master/evaluation.py). The test suite is taken from the [official CoNLL code](https://github.com/conll/reference-coreference-scorers/), with additions by [`@andreasvc`](https://github.com/andreasvc) and file parsing developed by Leo Born. ## How to use The metric takes two lists of sentences as input: one representing `predictions` and `references`, with the sentences consisting of words in the CoNLL format (see the [Limitations and bias](#Limitations-and-bias) section below for more details on the CoNLL format). ```python from evaluate import load coval = load('coval') words = ['bc/cctv/00/cctv_0005 0 0 Thank VBP (TOP(S(VP* thank 01 1 Xu_li * (V*) * -', ... 'bc/cctv/00/cctv_0005 0 1 you PRP (NP*) - - - Xu_li * (ARG1*) (ARG0*) (116)', ... 'bc/cctv/00/cctv_0005 0 2 everyone NN (NP*) - - - Xu_li * (ARGM-DIS*) * (116)', ... 'bc/cctv/00/cctv_0005 0 3 for IN (PP* - - - Xu_li * (ARG2* * -', ... 'bc/cctv/00/cctv_0005 0 4 watching VBG (S(VP*)))) watch 01 1 Xu_li * *) (V*) -', ... 'bc/cctv/00/cctv_0005 0 5 . . *)) - - - Xu_li * * * -'] references = [words] predictions = [words] results = coval.compute(predictions=predictions, references=references) ``` It also has several optional arguments: `keep_singletons`: After extracting all mentions of key or system file mentions whose corresponding coreference chain is of size one are considered as singletons. The default evaluation mode will include singletons in evaluations if they are included in the key or the system files. By setting `keep_singletons=False`, all singletons in the key and system files will be excluded from the evaluation. `NP_only`: Most of the recent coreference resolvers only resolve NP mentions and leave out the resolution of VPs. By setting the `NP_only` option, the scorer will only evaluate the resolution of NPs. `min_span`: By setting `min_span`, the scorer reports the results based on automatically detected minimum spans. Minimum spans are determined using the [MINA algorithm](https://arxiv.org/pdf/1906.06703.pdf). ## Output values The metric outputs a dictionary with the following key-value pairs: `mentions`: number of mentions, ranges from 0-1 `muc`: MUC metric, which expresses performance in terms of recall and precision, ranging from 0-1. `bcub`: B-cubed metric, which is the averaged precision of all items in the distribution, ranging from 0-1. `ceafe`: CEAFe (Constrained Entity Alignment F-Measure) is computed by aligning reference and system entities with the constraint that a reference entity is aligned with at most one reference entity. It ranges from 0-1 `lea`: LEA is a Link-Based Entity-Aware metric which, for each entity, considers how important the entity is and how well it is resolved. It ranges from 0-1. `conll_score`: averaged CoNLL score (the average of the F1 values of `muc`, `bcub` and `ceafe`), ranging from 0 to 100. ### Values from popular papers Given that many of the metrics returned by COVAL come from different sources, is it hard to cite reference values for all of them. The CoNLL score is used to track progress on different datasets such as the [ARRAU corpus](https://paperswithcode.com/sota/coreference-resolution-on-the-arrau-corpus) and [CoNLL 2012](https://paperswithcode.com/sota/coreference-resolution-on-conll-2012). 
## Examples Maximal values ```python from evaluate import load coval = load('coval') words = ['bc/cctv/00/cctv_0005 0 0 Thank VBP (TOP(S(VP* thank 01 1 Xu_li * (V*) * -', ... 'bc/cctv/00/cctv_0005 0 1 you PRP (NP*) - - - Xu_li * (ARG1*) (ARG0*) (116)', ... 'bc/cctv/00/cctv_0005 0 2 everyone NN (NP*) - - - Xu_li * (ARGM-DIS*) * (116)', ... 'bc/cctv/00/cctv_0005 0 3 for IN (PP* - - - Xu_li * (ARG2* * -', ... 'bc/cctv/00/cctv_0005 0 4 watching VBG (S(VP*)))) watch 01 1 Xu_li * *) (V*) -', ... 'bc/cctv/00/cctv_0005 0 5 . . *)) - - - Xu_li * * * -'] references = [words] predictions = [words] results = coval.compute(predictions=predictions, references=references) print(results) {'mentions/recall': 1.0, 'mentions/precision': 1.0, 'mentions/f1': 1.0, 'muc/recall': 1.0, 'muc/precision': 1.0, 'muc/f1': 1.0, 'bcub/recall': 1.0, 'bcub/precision': 1.0, 'bcub/f1': 1.0, 'ceafe/recall': 1.0, 'ceafe/precision': 1.0, 'ceafe/f1': 1.0, 'lea/recall': 1.0, 'lea/precision': 1.0, 'lea/f1': 1.0, 'conll_score': 100.0} ``` ## Limitations and bias This wrapper of CoVal currently only works with [CoNLL line format](https://huggingface.co/datasets/conll2003), which has one word per line with all the annotation for this word in column separated by spaces: | Column | Type | Description | |:-------|:----------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | Document ID | This is a variation on the document filename | | 2 | Part number | Some files are divided into multiple parts numbered as 000, 001, 002, ... etc. | | 3 | Word number | | | 4 | Word | This is the token as segmented/tokenized in the Treebank. Initially the *_skel file contain the placeholder [WORD] which gets replaced by the actual token from the Treebank which is part of the OntoNotes release. | | 5 | Part-of-Speech | | | 6 | Parse bit | This is the bracketed structure broken before the first open parenthesis in the parse, and the word/part-of-speech leaf replaced with a *. The full parse can be created by substituting the asterix with the "([pos] [word])" string (or leaf) and concatenating the items in the rows of that column. | | 7 | Predicate lemma | The predicate lemma is mentioned for the rows for which we have semantic role information. All other rows are marked with a "-". | | 8 | Predicate Frameset ID | This is the PropBank frameset ID of the predicate in Column 7. | | 9 | Word sense | This is the word sense of the word in Column 3. | | 10 | Speaker/Author | This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. | | 11 | Named Entities | These columns identifies the spans representing various named entities. | | 12:N | Predicate Arguments | There is one column each of predicate argument structure information for the predicate mentioned in Column 7. | | N | Coreference | Coreference chain information encoded in a parenthesis structure. 
| ## Citations ```bibtex @InProceedings{moosavi2019minimum, author = { Nafise Sadat Moosavi, Leo Born, Massimo Poesio and Michael Strube}, title = {Using Automatically Extracted Minimum Spans to Disentangle Coreference Evaluation from Boundary Detection}, year = {2019}, booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, publisher = {Association for Computational Linguistics}, address = {Florence, Italy}, } ``` ```bibtex @inproceedings{10.3115/1072399.1072405, author = {Vilain, Marc and Burger, John and Aberdeen, John and Connolly, Dennis and Hirschman, Lynette}, title = {A Model-Theoretic Coreference Scoring Scheme}, year = {1995}, isbn = {1558604022}, publisher = {Association for Computational Linguistics}, address = {USA}, url = {https://doi.org/10.3115/1072399.1072405}, doi = {10.3115/1072399.1072405}, booktitle = {Proceedings of the 6th Conference on Message Understanding}, pages = {45–52}, numpages = {8}, location = {Columbia, Maryland}, series = {MUC6 ’95} } ``` ```bibtex @INPROCEEDINGS{Bagga98algorithmsfor, author = {Amit Bagga and Breck Baldwin}, title = {Algorithms for Scoring Coreference Chains}, booktitle = {In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference}, year = {1998}, pages = {563--566} } ``` ```bibtex @INPROCEEDINGS{Luo05oncoreference, author = {Xiaoqiang Luo}, title = {On coreference resolution performance metrics}, booktitle = {In Proc. of HLT/EMNLP}, year = {2005}, pages = {25--32}, publisher = {URL} } ``` ```bibtex @inproceedings{moosavi-strube-2016-coreference, title = "Which Coreference Evaluation Metric Do You Trust? A Proposal for a Link-based Entity Aware Metric", author = "Moosavi, Nafise Sadat and Strube, Michael", booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2016", address = "Berlin, Germany", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P16-1060", doi = "10.18653/v1/P16-1060", pages = "632--642", } ``` ## Further References - [CoNLL 2012 Task Description](http://www.conll.cemantix.org/2012/data.html): for information on the format (section "*_conll File Format") - [CoNLL Evaluation details](https://github.com/ns-moosavi/coval/blob/master/conll/README.md) - [Hugging Face - Neural Coreference Resolution (Neuralcoref)](https://huggingface.co/coref/)
huggingface/evaluate/blob/main/metrics/coval/README.md
🤗 Datasets Server Datasets Server is a lightweight web API for visualizing and exploring all types of datasets - computer vision, speech, text, and tabular - stored on the Hugging Face [Hub](https://huggingface.co/datasets). The main feature of the Datasets Server is to auto-convert all the [Hub datasets](https://huggingface.co/datasets) to [Parquet](https://parquet.apache.org/). Read more in the [Parquet section](./parquet). As datasets increase in size and data type richness, the cost of preprocessing (storage and compute) these datasets can be challenging and time-consuming. To help users access these modern datasets, Datasets Server runs a server behind the scenes to generate the API responses ahead of time and stores them in a database so they are instantly returned when you make a query through the API. Let Datasets Server take care of the heavy lifting so you can use a simple **REST API** on any of the **30,000+ datasets on Hugging Face** to: - List the **dataset splits, column names and data types** - Get the **dataset size** (in number of rows or bytes) - Download and view **rows at any index** in the dataset - **Search** a word in the dataset - Get insightful **statistics** about the data - Access the dataset as **parquet files** to use in your favorite **processing or analytics framework** <div class="flex justify-center"> <img style="margin-bottom: 0;" class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/openbookqa_light.png" /> <img style="margin-bottom: 0;" class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/openbookqa_dark.png" /> </div> <p style="text-align: center; font-style: italic; margin-top: 0;"> Dataset viewer of the <a href="https://huggingface.co/datasets/openbookqa" rel="nofollow"> OpenBookQA dataset </a> </p> Join the growing community on the [forum](https://discuss.huggingface.co/) or [Discord](https://discord.com/invite/JfAtkvEtRb) today, and give the [Datasets Server repository](https://github.com/huggingface/datasets-server) a ⭐️ if you're interested in the latest updates!
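As a quick, hedged illustration of the REST API described above, the sketch below queries the public `/splits` and `/rows` endpoints with Python's `requests` library; the dataset name (`openbookqa`) and its `main` config are only illustrative assumptions, and the endpoints and response fields are documented in detail in the guides that follow.

```python
import requests

API_URL = "https://datasets-server.huggingface.co"
dataset = "openbookqa"  # any public dataset on the Hub (illustrative choice)

# List the configs and splits available for the dataset
splits = requests.get(f"{API_URL}/splits", params={"dataset": dataset}).json()
print(splits["splits"])

# Fetch the first 10 rows of one split (config name assumed to be "main")
rows = requests.get(
    f"{API_URL}/rows",
    params={"dataset": dataset, "config": "main", "split": "train", "offset": 0, "length": 10},
).json()
print(rows["rows"][0])
```

The responses are plain JSON, so the same calls work from any language or from `curl` on the command line.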
huggingface/datasets-server/blob/main/docs/source/index.mdx
`tokenizers-android-arm64` This is the **aarch64-linux-android** binary for `tokenizers`.
huggingface/tokenizers/blob/main/bindings/node/npm/android-arm64/README.md
Gradio Demo: audio_debugger ``` !pip install -q gradio ``` ``` # Downloading files from the demo repo import os !wget -q https://github.com/gradio-app/gradio/raw/main/demo/audio_debugger/cantina.wav ``` ``` import gradio as gr import subprocess import os audio_file = os.path.join(os.path.abspath(''), "cantina.wav") with gr.Blocks() as demo: with gr.Tab("Audio"): gr.Audio(audio_file) with gr.Tab("Interface"): gr.Interface(lambda x:x, "audio", "audio", examples=[audio_file], cache_examples=True) with gr.Tab("console"): ip = gr.Textbox(label="User IP Address") gr.Interface(lambda cmd:subprocess.run([cmd], capture_output=True, shell=True).stdout.decode('utf-8').strip(), "text", "text") def get_ip(request: gr.Request): return request.client.host demo.load(get_ip, None, ip) if __name__ == "__main__": demo.queue() demo.launch() ```
gradio-app/gradio/blob/main/demo/audio_debugger/run.ipynb
!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Mixins & serialization methods ## Mixins The `huggingface_hub` library offers a range of mixins that can be used as a parent class for your objects, in order to provide simple uploading and downloading functions. Check out our [integration guide](../guides/integrations) to learn how to integrate any ML framework with the Hub. ### Generic [[autodoc]] ModelHubMixin - all - _save_pretrained - _from_pretrained ### PyTorch [[autodoc]] PyTorchModelHubMixin ### Keras [[autodoc]] KerasModelHubMixin [[autodoc]] from_pretrained_keras [[autodoc]] push_to_hub_keras [[autodoc]] save_pretrained_keras ### Fastai [[autodoc]] from_pretrained_fastai [[autodoc]] push_to_hub_fastai
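As a minimal, hedged sketch of how such a mixin is typically used (not the canonical integration for any specific framework), the example below subclasses `PyTorchModelHubMixin`; the repository name, model class, and hyperparameters are placeholders.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class MyModel(nn.Module, PyTorchModelHubMixin):
    # A toy model used only to illustrate the mixin
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.layer = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.layer(x)


model = MyModel(hidden_size=256)

# Save the weights locally in a Hub-compatible layout
model.save_pretrained("my-awesome-model")

# Push them to the Hub (requires being logged in); the repo name is a placeholder
model.push_to_hub("my-awesome-model")

# Reload the model directly from the Hub
reloaded = MyModel.from_pretrained("username/my-awesome-model", hidden_size=256)
```

See the [integration guide](../guides/integrations) linked above for the recommended way to wire this into a full framework.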
huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/mixins.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Text-to-image <Tip warning={true}> The text-to-image script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. </Tip> Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable `gradient_checkpointing` and `mixed_precision`, it is possible to train a model on a single 24GB GPU. If you're training with larger batch sizes or want to train faster, it's better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with [xFormers](../optimization/xformers). JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then navigate to the example folder containing the training script and install the required dependencies for the script you're using: <hfoptions id="installation"> <hfoption id="PyTorch"> ```bash cd examples/text_to_image pip install -r requirements.txt ``` </hfoption> <hfoption id="Flax"> ```bash cd examples/text_to_image pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ``` To setup a default 🤗 Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```bash from accelerate.utils import write_basic_config write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. ## Script parameters <Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. 
If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) and let us know if you have any questions or concerns. </Tip> The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L193) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like. For example, to speedup training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command: ```bash accelerate launch train_text_to_image.py \ --mixed_precision="fp16" ``` Some basic and important parameters include: - `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model - `--dataset_name`: the name of the dataset on the Hub or a local path to the dataset to train on - `--image_column`: the name of the image column in the dataset to train on - `--caption_column`: the name of the text column in the dataset to train on - `--output_dir`: where to save the trained model - `--push_to_hub`: whether to push the trained model to the Hub - `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding `--resume_from_checkpoint` to your training command ### Min-SNR weighting The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the `--snr_gamma` parameter and set it to the recommended value of 5.0: ```bash accelerate launch train_text_to_image.py \ --snr_gamma=5.0 ``` You can compare the loss surfaces for different `snr_gamma` values in this [Weights and Biases](https://wandb.ai/sayakpaul/text2image-finetune-minsnr) report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. ## Training script The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L490) function. If you need to adapt the training script, this is where you'll need to make your changes. The `train_text_to_image` script starts by [loading a scheduler](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L543) and tokenizer. 
You can choose to use a different scheduler here if you want: ```py noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") tokenizer = CLIPTokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision ) ``` Then the script [loads the UNet](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L619) model: ```py load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") model.register_to_config(**load_model.config) model.load_state_dict(load_model.state_dict()) ``` Next, the text and image columns of the dataset need to be preprocessed. The [`tokenize_captions`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L724) function handles tokenizing the inputs, and the [`train_transforms`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L742) function specifies the type of transforms to apply to the image. Both of these functions are bundled into `preprocess_train`: ```py def preprocess_train(examples): images = [image.convert("RGB") for image in examples[image_column]] examples["pixel_values"] = [train_transforms(image) for image in images] examples["input_ids"] = tokenize_captions(examples) return examples ``` Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L878) handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. ## Launch the script Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀 <hfoptions id="training-inference"> <hfoption id="PyTorch"> Let's train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate your own Pokémon. Set the environment variables `MODEL_NAME` and `dataset_name` to the model and the dataset (either from the Hub or a local path). If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command. <Tip> To train on a local dataset, set the `TRAIN_DIR` and `OUTPUT_DIR` environment variables to the path of the dataset and where to save the model to. 
</Tip> ```bash export MODEL_NAME="runwayml/stable-diffusion-v1-5" export dataset_name="lambdalabs/pokemon-blip-captions" accelerate launch --mixed_precision="fp16" train_text_to_image.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --dataset_name=$dataset_name \ --use_ema \ --resolution=512 --center_crop --random_flip \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --gradient_checkpointing \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --enable_xformers_memory_efficient_attention --lr_scheduler="constant" --lr_warmup_steps=0 \ --output_dir="sd-pokemon-model" \ --push_to_hub ``` </hfoption> <hfoption id="Flax"> Training with Flax can be faster on TPUs and GPUs thanks to [@duongna21](https://github.com/duongna21). Flax is more efficient on a TPU, but GPU performance is also great. Set the environment variables `MODEL_NAME` and `dataset_name` to the model and the dataset (either from the Hub or a local path). <Tip> To train on a local dataset, set the `TRAIN_DIR` and `OUTPUT_DIR` environment variables to the path of the dataset and where to save the model to. </Tip> ```bash export MODEL_NAME="runwayml/stable-diffusion-v1-5" export dataset_name="lambdalabs/pokemon-blip-captions" python train_text_to_image_flax.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --dataset_name=$dataset_name \ --resolution=512 --center_crop --random_flip \ --train_batch_size=1 \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --output_dir="sd-pokemon-model" \ --push_to_hub ``` </hfoption> </hfoptions> Once training is complete, you can use your newly trained model for inference: <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```py from diffusers import StableDiffusionPipeline import torch pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") image = pipeline(prompt="yoda").images[0] image.save("yoda-pokemon.png") ``` </hfoption> <hfoption id="Flax"> ```py import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path/to/saved_model", dtype=jax.numpy.bfloat16) prompt = "yoda pokemon" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, jax.device_count()) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) images[0].save("yoda-pokemon.png") ``` </hfoption> </hfoptions> ## Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: - Learn how to [load LoRA weights](../using-diffusers/loading_adapters#LoRA) for inference if you trained your model with LoRA. - Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the [Text-to-image](../using-diffusers/conditional_image_generation) task guide.
huggingface/diffusers/blob/main/docs/source/en/training/text2image.md
Sequence-to-sequence models[sequence-to-sequence-models] <CourseFloatingBanner chapter={1} classNames="absolute z-10 right-0 top-0" /> <Youtube id="0_4KEb08xrE" /> Encoder-decoder models (also called *sequence-to-sequence models*) use both parts of the Transformer architecture. At each stage, the attention layers of the encoder can access all the words in the initial sentence, whereas the attention layers of the decoder can only access the words positioned before a given word in the input. The pretraining of these models can be done using the objectives of encoder or decoder models, but usually involves something a bit more complex. For instance, [T5](https://huggingface.co/t5-base) is pretrained by replacing random spans of text (that can contain several words) with a single mask special word, and the objective is then to predict the text that this mask word replaces. Sequence-to-sequence models are best suited for tasks revolving around generating new sentences depending on a given input, such as summarization, translation, or generative question answering. Representatives of this family of models include: - [BART](https://huggingface.co/transformers/model_doc/bart.html) - [mBART](https://huggingface.co/transformers/model_doc/mbart.html) - [Marian](https://huggingface.co/transformers/model_doc/marian.html) - [T5](https://huggingface.co/transformers/model_doc/t5.html)
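To make the text-to-text behavior described above concrete, here is a small illustrative example (a sketch, not part of the original chapter) that runs one of these checkpoints through the `pipeline` API; the checkpoint and task choice are just assumptions for demonstration.

```python
from transformers import pipeline

# T5 casts every task as text-to-text; here we use its English -> French translation task
translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("Sequence-to-sequence models are well suited for translation.")
print(result)
# e.g. [{'translation_text': '...'}]
```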
huggingface/course/blob/main/chapters/en/chapter1/7.mdx
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Summarization [[open-in-colab]] <Youtube id="yHnr5Dk2zCI"/> Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be: - Extractive: extract the most relevant information from a document. - Abstractive: generate new text that captures the most relevant information. This guide will show you how to: 1. Finetune [T5](https://huggingface.co/t5-small) on the California state bill subset of the [BillSum](https://huggingface.co/datasets/billsum) dataset for abstractive summarization. 2. Use your finetuned model for inference. <Tip> The task illustrated in this tutorial is supported by the following model architectures: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SeamlessM4T](../model_doc/seamless_m4t), [SeamlessM4Tv2](../model_doc/seamless_m4t_v2), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM-ProphetNet](../model_doc/xlm-prophetnet) <!--End of the generated tip--> </Tip> Before you begin, make sure you have all the necessary libraries installed: ```bash pip install transformers datasets evaluate rouge_score ``` We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to login: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Load BillSum dataset Start by loading the smaller California state bill subset of the BillSum dataset from the 🤗 Datasets library: ```py >>> from datasets import load_dataset >>> billsum = load_dataset("billsum", split="ca_test") ``` Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method: ```py >>> billsum = billsum.train_test_split(test_size=0.2) ``` Then take a look at an example: ```py >>> billsum["train"][0] {'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. 
Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employee’s or dependent’s actual or perceived gender identity, including, but not limited to, the employee’s or dependent’s identification as transgender.\n(2) For purposes of this section, “contract” includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractor’s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractor’s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractor’s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining the benefit.\n(2) 
If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.', 'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'} ``` There are two fields that you'll want to use: - `text`: the text of the bill which'll be the input to the model. - `summary`: a condensed version of `text` which'll be the model target. ## Preprocess The next step is to load a T5 tokenizer to process `text` and `summary`: ```py >>> from transformers import AutoTokenizer >>> checkpoint = "t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) ``` The preprocessing function you want to create needs to: 1. Prefix the input with a prompt so T5 knows this is a summarization task. Some models capable of multiple NLP tasks require prompting for specific tasks. 2. Use the keyword `text_target` argument when tokenizing labels. 3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter. ```py >>> prefix = "summarize: " >>> def preprocess_function(examples): ... 
inputs = [prefix + doc for doc in examples["text"]] ... model_inputs = tokenizer(inputs, max_length=1024, truncation=True) ... labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True) ... model_inputs["labels"] = labels["input_ids"] ... return model_inputs ``` To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once: ```py >>> tokenized_billsum = billsum.map(preprocess_function, batched=True) ``` Now create a batch of examples using [`DataCollatorForSeq2Seq`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf") ``` </tf> </frameworkcontent> ## Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric): ```py >>> import evaluate >>> rouge = evaluate.load("rouge") ``` Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the ROUGE metric: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) ... labels = np.where(labels != -100, labels, tokenizer.pad_token_id) ... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) ... result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) ... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] ... result["gen_len"] = np.mean(prediction_lens) ... return {k: round(v, 4) for k, v in result.items()} ``` Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training. ## Train <frameworkcontent> <pt> <Tip> If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)! </Tip> You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]: ```py >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` At this point, only three steps remain: 1. Define your training hyperparameters in [`Seq2SeqTrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the ROUGE metric and save the training checkpoint. 2. 
Pass the training arguments to [`Seq2SeqTrainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [`~Trainer.train`] to finetune your model. ```py >>> training_args = Seq2SeqTrainingArguments( ... output_dir="my_awesome_billsum_model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3, ... num_train_epochs=4, ... predict_with_generate=True, ... fp16=True, ... push_to_hub=True, ... ) >>> trainer = Seq2SeqTrainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_billsum["train"], ... eval_dataset=tokenized_billsum["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)! </Tip> To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` Then you can load T5 with [`TFAutoModelForSeq2SeqLM`]: ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_billsum["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... tokenized_billsum["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # No loss argument! ``` The last two things to set up before you start training are to compute the ROUGE score from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`] (here we evaluate on the test split created above): ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set) ``` Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_billsum_model", ... tokenizer=tokenizer, ... ) ``` Then bundle your callbacks together: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` Finally, you're ready to start training your model! 
Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) ``` Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! </tf> </frameworkcontent> <Tip> For a more in-depth example of how to finetune a model for summarization, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb). </Tip> ## Inference Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like to summarize. For T5, you need to prefix your input depending on the task you're working on. For summarization you should prefix your input as shown below: ```py >>> text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes." ``` The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for summarization with your model, and pass your text to it: ```py >>> from transformers import pipeline >>> summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model") >>> summarizer(text) [{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}] ``` You can also manually replicate the results of the `pipeline` if you'd like: <frameworkcontent> <pt> Tokenize the text and return the `input_ids` as PyTorch tensors: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="pt").input_ids ``` Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. ```py >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` Decode the generated token ids back into text: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' 
``` </pt> <tf> Tokenize the text and return the `input_ids` as TensorFlow tensors: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="tf").input_ids ``` Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API. ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` Decode the generated token ids back into text: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' ``` </tf> </frameworkcontent>
huggingface/transformers/blob/main/docs/source/en/tasks/summarization.md
Metric Card for IndicGLUE ## Metric description This metric is used to compute the evaluation metric for the [IndicGLUE dataset](https://huggingface.co/datasets/indic_glue). IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide variety of tasks and covers 11 major Indian languages - Assamese (`as`), Bengali (`bn`), Gujarati (`gu`), Hindi (`hi`), Kannada (`kn`), Malayalam (`ml`), Marathi(`mr`), Oriya(`or`), Panjabi (`pa`), Tamil(`ta`) and Telugu (`te`). ## How to use There are two steps: (1) loading the IndicGLUE metric relevant to the subset of the dataset being used for evaluation; and (2) calculating the metric. 1. **Loading the relevant IndicGLUE metric** : the subsets of IndicGLUE are the following: `wnli`, `copa`, `sna`, `csqa`, `wstp`, `inltkh`, `bbca`, `cvit-mkb-clsr`, `iitp-mr`, `iitp-pr`, `actsa-sc`, `md`, and`wiki-ner`. More information about the different subsets of the Indic GLUE dataset can be found on the [IndicGLUE dataset page](https://indicnlp.ai4bharat.org/indic-glue/). 2. **Calculating the metric**: the metric takes two inputs : one list with the predictions of the model to score and one lists of references for each translation for all subsets of the dataset except for `cvit-mkb-clsr`, where each prediction and reference is a vector of floats. ```python from datasets import load_metric indic_glue_metric = load_metric('indic_glue', 'wnli') references = [0, 1] predictions = [0, 1] results = indic_glue_metric.compute(predictions=predictions, references=references) ``` ## Output values The output of the metric depends on the IndicGLUE subset chosen, consisting of a dictionary that contains one or several of the following metrics: `accuracy`: the proportion of correct predictions among the total number of cases processed, with a range between 0 and 1 (see [accuracy](https://huggingface.co/metrics/accuracy) for more information). `f1`: the harmonic mean of the precision and recall (see [F1 score](https://huggingface.co/metrics/f1) for more information). Its range is 0-1 -- its lowest possible value is 0, if either the precision or the recall is 0, and its highest possible value is 1.0, which means perfect precision and recall. `precision@10`: the fraction of the true examples among the top 10 predicted examples, with a range between 0 and 1 (see [precision](https://huggingface.co/metrics/precision) for more information). The `cvit-mkb-clsr` subset returns `precision@10`, the `wiki-ner` subset returns `accuracy` and `f1`, and all other subsets of Indic GLUE return only accuracy. ### Values from popular papers The [original IndicGlue paper](https://aclanthology.org/2020.findings-emnlp.445.pdf) reported an average accuracy of 0.766 on the dataset, which varies depending on the subset selected. 
## Examples Maximal values for the WNLI subset (which outputs `accuracy`): ```python from datasets import load_metric indic_glue_metric = load_metric('indic_glue', 'wnli') references = [0, 1] predictions = [0, 1] results = indic_glue_metric.compute(predictions=predictions, references=references) print(results) {'accuracy': 1.0} ``` Minimal values for the Wiki-NER subset (which outputs `accuracy` and `f1`): ```python >>> from datasets import load_metric >>> indic_glue_metric = load_metric('indic_glue', 'wiki-ner') >>> references = [0, 1] >>> predictions = [1, 0] >>> results = indic_glue_metric.compute(predictions=predictions, references=references) >>> print(results) {'accuracy': 0.0, 'f1': 0.0} ``` Partial match for the CVIT-Mann Ki Baat subset (which outputs `precision@10`): ```python >>> from datasets import load_metric >>> indic_glue_metric = load_metric('indic_glue', 'cvit-mkb-clsr') >>> references = [[0.5, 0.5, 0.5], [0.1, 0.2, 0.3]] >>> predictions = [[0.5, 0.5, 0.5], [0.1, 0.2, 0.3]] >>> results = indic_glue_metric.compute(predictions=predictions, references=references) >>> print(results) {'precision@10': 1.0} ``` ## Limitations and bias This metric works only with datasets that have the same format as the [IndicGLUE dataset](https://huggingface.co/datasets/indic_glue). ## Citation ```bibtex @inproceedings{kakwani2020indicnlpsuite, title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}}, author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar}, year={2020}, booktitle={Findings of EMNLP}, } ``` ## Further References - [IndicNLP website](https://indicnlp.ai4bharat.org/home/) -
huggingface/datasets/blob/main/metrics/indic_glue/README.md
!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Performance and Scalability Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment. This documentation aims to assist you in overcoming these challenges and finding the optimal setting for your use-case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training or CPU vs. GPU for inference. Use this document as your starting point to navigate further to the methods that match your scenario. ## Training Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. The methods that you can apply to improve training efficiency on a single GPU extend to other setups such as multiple GPU. However, there are also techniques that are specific to multi-GPU or CPU training. We cover them in separate sections. * [Methods and tools for efficient training on a single GPU](perf_train_gpu_one): start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both. * [Multi-GPU training section](perf_train_gpu_many): explore this section to learn about further optimization methods that apply to a multi-GPU settings, such as data, tensor, and pipeline parallelism. * [CPU training section](perf_train_cpu): learn about mixed precision training on CPU. * [Efficient Training on Multiple CPUs](perf_train_cpu_many): learn about distributed CPU training. * [Training on TPU with TensorFlow](perf_train_tpu_tf): if you are new to TPUs, refer to this section for an opinionated introduction to training on TPUs and using XLA. * [Custom hardware for training](perf_hardware): find tips and tricks when building your own deep learning rig. * [Hyperparameter Search using Trainer API](hpo_train) ## Inference Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups. * [Inference on a single CPU](perf_infer_cpu) * [Inference on a single GPU](perf_infer_gpu_one) * [Multi-GPU inference](perf_infer_gpu_one) * [XLA Integration for TensorFlow Models](tf_xla) ## Training and inference Here you'll find techniques, tips and tricks that apply whether you are training a model, or running inference with it. 
* [Instantiating a big model](big_models) * [Troubleshooting performance issues](debugging) ## Contribute This document is far from being complete and a lot more needs to be added, so if you have additions or corrections to make please don't hesitate to open a PR or if you aren't sure start an Issue and we can discuss the details there. When making contributions that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you).
huggingface/transformers/blob/main/docs/source/en/performance.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: [Designing a Better Asymmetric VQGAN for StableDiffusion](https://arxiv.org/abs/2306.04632) by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: *StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN* Evaluation results can be found in section 4.1 of the original paper. 
## Available checkpoints * [https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5](https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5) * [https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2](https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2) ## Example Usage ```python from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline from diffusers.utils import load_image, make_image_grid prompt = "a photo of a person with beard" img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" original_image = load_image(img_url).resize((512, 512)) mask_image = load_image(mask_url).resize((512, 512)) pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") pipe.to("cuda") image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] make_image_grid([original_image, mask_image, image], rows=1, cols=3) ``` ## AsymmetricAutoencoderKL [[autodoc]] models.autoencoders.autoencoder_asym_kl.AsymmetricAutoencoderKL ## AutoencoderKLOutput [[autodoc]] models.autoencoders.autoencoder_kl.AutoencoderKLOutput ## DecoderOutput [[autodoc]] models.autoencoders.vae.DecoderOutput
huggingface/diffusers/blob/main/docs/source/en/api/models/asymmetricautoencoderkl.md
ResNet-D **ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the [1×1 convolution](https://paperswithcode.com/method/1x1-convolution) for the downsampling block ignores 3/4 of input feature maps, so this is modified so no information will be ignored ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('resnet101d', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `resnet101d`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('resnet101d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. 
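For intuition, here is a rough PyTorch sketch (not timm's actual implementation) contrasting the original ResNet downsampling shortcut with the ResNet-D tweak described at the top of this page; the channel sizes are arbitrary.

```python
import torch.nn as nn

in_channels, out_channels = 256, 512

# Original ResNet shortcut: a stride-2 1x1 convolution, which only "sees" 1/4 of the input positions
downsample_resnet = nn.Sequential(
    nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(out_channels),
)

# ResNet-D shortcut: average pooling first so every position contributes, then a stride-1 1x1 convolution
downsample_resnet_d = nn.Sequential(
    nn.AvgPool2d(kernel_size=2, stride=2),
    nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, bias=False),
    nn.BatchNorm2d(out_channels),
)
```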
## Citation ```BibTeX @misc{he2018bag, title={Bag of Tricks for Image Classification with Convolutional Neural Networks}, author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li}, year={2018}, eprint={1812.01187}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: ResNet-D Paper: Title: Bag of Tricks for Image Classification with Convolutional Neural Networks URL: https://paperswithcode.com/paper/bag-of-tricks-for-image-classification-with Models: - Name: resnet101d In Collection: ResNet-D Metadata: FLOPs: 13805639680 Parameters: 44570000 File Size: 178791263 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet101d Crop Pct: '0.94' Image Size: '256' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L716 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet101d_ra2-2803ffab.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 82.31% Top 5 Accuracy: 96.06% - Name: resnet152d In Collection: ResNet-D Metadata: FLOPs: 20155275264 Parameters: 60210000 File Size: 241596837 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet152d Crop Pct: '0.94' Image Size: '256' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L724 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet152d_ra2-5cac0439.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.13% Top 5 Accuracy: 96.35% - Name: resnet18d In Collection: ResNet-D Metadata: FLOPs: 2645205760 Parameters: 11710000 File Size: 46893231 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet18d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L649 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet18d_ra2-48a79e06.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 72.27% Top 5 Accuracy: 90.69% - Name: resnet200d In Collection: ResNet-D Metadata: FLOPs: 26034378752 Parameters: 64690000 File Size: 259662933 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet200d Crop Pct: '0.94' Image Size: '256' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L749 Weights: 
https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet200d_ra2-bdba9bf9.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.24% Top 5 Accuracy: 96.49% - Name: resnet26d In Collection: ResNet-D Metadata: FLOPs: 3335276032 Parameters: 16010000 File Size: 64209122 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet26d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L683 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26d-69e92c46.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.69% Top 5 Accuracy: 93.15% - Name: resnet34d In Collection: ResNet-D Metadata: FLOPs: 5026601728 Parameters: 21820000 File Size: 87369807 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet34d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L666 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34d_ra2-f8dcfcaf.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.11% Top 5 Accuracy: 93.38% - Name: resnet50d In Collection: ResNet-D Metadata: FLOPs: 5591002624 Parameters: 25580000 File Size: 102567109 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: resnet50d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L699 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.55% Top 5 Accuracy: 95.16% -->
huggingface/pytorch-image-models/blob/main/docs/models/resnet-d.md
Contributor Covenant Code of Conduct ## Our Pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances of any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Enforcement Responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at feedback@huggingface.co. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Enforcement Guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ### 1. Correction **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ### 2. Warning **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. 
No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ### 3. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at [https://www.contributor-covenant.org/version/2/0/code_of_conduct.html][v2.0]. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC]. For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations]. [homepage]: https://www.contributor-covenant.org [v2.0]: https://www.contributor-covenant.org/version/2/0/code_of_conduct.html [Mozilla CoC]: https://github.com/mozilla/diversity [FAQ]: https://www.contributor-covenant.org/faq [translations]: https://www.contributor-covenant.org/translations
huggingface/simulate/blob/main/CODE_OF_CONDUCT.md
`@gradio/upload` ```html <script> import { Upload, ModifyUpload, normalise_file, get_fetchable_url_or_file, upload, prepare_files } from "@gradio/upload"; </script> ``` Upload ```javascript export let filetype: string | null = null; export let dragging = false; export let boundedheight = true; export let center = true; export let flex = true; export let file_count = "single"; export let disable_click = false; export let root: string; export let hidden = false; ``` ModifyUpload ```javascript export let editable = false; export let undoable = false; export let absolute = true; export let i18n: I18nFormatter; ``` ```javascript export function normalise_file( file: FileData | null, server_url: string, proxy_url: string | null ): FileData | null; export function normalise_file( file: FileData[] | null, server_url: string, proxy_url: string | null ): FileData[] | null; export function normalise_file( file: FileData[] | FileData | null, server_url: string, // root: string, proxy_url: string | null // root_url: string | null ): FileData[] | FileData | null; export function normalise_file( file: FileData[] | FileData | null, server_url: string, // root: string, proxy_url: string | null // root_url: string | null ): FileData[] | FileData | null; export function get_fetchable_url_or_file( path: string | null, server_url: string, proxy_url: string | null ): string export async function upload( file_data: FileData[], root: string, upload_fn: typeof upload_files = upload_files ): Promise<(FileData | null)[] | null> export async function prepare_files( files: File[], is_stream?: boolean ): Promise<FileData[]> { return files.map( (f, i) => new FileData({ path: f.name, orig_name: f.name, blob: f, size: f.size, mime_type: f.type, is_stream }) ); } ```
gradio-app/gradio/blob/main/js/upload/README.md
Gradio Demo: hello_blocks_decorator ``` !pip install -q gradio ``` ``` import gradio as gr with gr.Blocks() as demo: name = gr.Textbox(label="Name") output = gr.Textbox(label="Output Box") greet_btn = gr.Button("Greet") @greet_btn.click(inputs=name, outputs=output) def greet(name): return "Hello " + name + "!" if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/hello_blocks_decorator/run.ipynb
!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MegatronBERT ## Overview The MegatronBERT model was proposed in [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. The abstract from the paper is the following: *Recent work in language modeling demonstrates that training large transformer models advances the state of the art in Natural Language Processing applications. However, very large models can be quite difficult to train due to memory constraints. In this work, we present our techniques for training very large transformer models and implement a simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy of 89.4%).* This model was contributed by [jdemouth](https://huggingface.co/jdemouth). The original code can be found [here](https://github.com/NVIDIA/Megatron-LM). That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and "pipeline parallel" techniques. ## Usage tips We have provided pretrained [BERT-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m) checkpoints for use to evaluate or finetuning downstream tasks. To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and setup the NVIDIA GPU Cloud (NGC) Registry CLI. 
Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1). Alternatively, you can directly download the checkpoints using: BERT-345M-uncased: ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O megatron_bert_345m_v0_1_uncased.zip ``` BERT-345M-cased: ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O megatron_bert_345m_v0_1_cased.zip ``` Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will easily be loaded by Hugging Face Transformers and our port of the BERT code. The following commands allow you to do the conversion. We assume that the folder `models/megatron_bert` contains `megatron_bert_345m_v0_1_{cased, uncased}.zip` and that the commands are run from inside that folder: ```bash python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip ``` ```bash python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip ``` ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## MegatronBertConfig [[autodoc]] MegatronBertConfig ## MegatronBertModel [[autodoc]] MegatronBertModel - forward ## MegatronBertForMaskedLM [[autodoc]] MegatronBertForMaskedLM - forward ## MegatronBertForCausalLM [[autodoc]] MegatronBertForCausalLM - forward ## MegatronBertForNextSentencePrediction [[autodoc]] MegatronBertForNextSentencePrediction - forward ## MegatronBertForPreTraining [[autodoc]] MegatronBertForPreTraining - forward ## MegatronBertForSequenceClassification [[autodoc]] MegatronBertForSequenceClassification - forward ## MegatronBertForMultipleChoice [[autodoc]] MegatronBertForMultipleChoice - forward ## MegatronBertForTokenClassification [[autodoc]] MegatronBertForTokenClassification - forward ## MegatronBertForQuestionAnswering [[autodoc]] MegatronBertForQuestionAnswering - forward
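Once converted, the checkpoint folder can be loaded like any other BERT-style model with Transformers. The sketch below is illustrative only: the local folder name is an assumption, and pairing the uncased 345M checkpoint with the `bert-large-uncased` tokenizer vocabulary is also an assumption that may not match your setup.

```python
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM

# Folder written by the conversion script; the path below is an assumption.
checkpoint_dir = "./megatron_bert_345m_v0_1_uncased"

# Assumption: reuse the standard BERT-large uncased WordPiece vocabulary.
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = MegatronBertForMaskedLM.from_pretrained(checkpoint_dir)
model.eval()

inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Predict the masked token.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```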
huggingface/transformers/blob/main/docs/source/en/model_doc/megatron-bert.md
-- title: "Fetch Cuts ML Processing Latency by 50% Using Amazon SageMaker & Hugging Face" thumbnail: /blog/assets/78_ml_director_insights/fetch.png authors: - user: Violette --- # Fetch Cuts ML Processing Latency by 50% Using Amazon SageMaker & Hugging Face _This article is a cross-post from an originally published post on September 2023 [on AWS's website](https://aws.amazon.com/fr/solutions/case-studies/fetch-case-study/)._ ## Overview Consumer engagement and rewards company [Fetch](https://fetch.com/) offers an application that lets users earn rewards on their purchases by scanning their receipts. The company also parses these receipts to generate insights into consumer behavior and provides those insights to brand partners. As weekly scans rapidly grew, Fetch needed to improve its speed and precision. On Amazon Web Services (AWS), Fetch optimized its machine learning (ML) pipeline using Hugging Face and [Amazon SageMaker ](https://aws.amazon.com/sagemaker/), a service for building, training, and deploying ML models with fully managed infrastructure, tools, and workflows. Now, the Fetch app can process scans faster and with significantly higher accuracy. ## Opportunity | Using Amazon SageMaker to Accelerate an ML Pipeline in 12 Months for Fetch Using the Fetch app, customers can scan receipts, receive points, and redeem those points for gift cards. To reward users for receipt scans instantaneously, Fetch needed to be able to capture text from a receipt, extract the pertinent data, and structure it so that the rest of its system can process and analyze it. With over 80 million receipts processed per week—hundreds of receipts per second at peak traffic—it needed to perform this process quickly, accurately, and at scale. In 2021, Fetch set out to optimize its app’s scanning functionality. Fetch is an AWS-native company, and its ML operations team was already using Amazon SageMaker for many of its models. This made the decision to enhance its ML pipeline by migrating its models to Amazon SageMaker a straightforward one. Throughout the project, Fetch had weekly calls with the AWS team and received support from a subject matter expert whom AWS paired with Fetch. The company built, trained, and deployed more than five ML models using Amazon SageMaker in 12 months. In late 2022, Fetch rolled out its updated mobile app and new ML pipeline. #### "Amazon SageMaker is a game changer for Fetch. We use almost every feature extensively. As new features come out, they are immediately valuable. It’s hard to imagine having done this project without the features of Amazon SageMaker.” Sam Corzine, Machine Learning Engineer, Fetch ## Solution | Cutting Latency by 50% Using ML & Hugging Face on Amazon SageMaker GPU Instances #### "Using the flexibility of the Hugging Face AWS Deep Learning Container, we could improve the quality of our models,and Hugging Face’s partnership with AWS meant that it was simple to deploy these models.” Sam Corzine, Machine Learning Engineer, Fetch Fetch’s ML pipeline is powered by several Amazon SageMaker features, particularly [Amazon SageMaker Model Training](https://aws.amazon.com/sagemaker/train/), which reduces the time and cost to train and tune ML models at scale, and [Amazon SageMaker Processing](https://docs.aws.amazon.com/sagemaker/latest/dg/processing-job.html), a simplified, managed experience to run data-processing workloads. The company runs its custom ML models using multi-GPU instances for fast performance. 
“The GPU instances on Amazon SageMaker are simple to use,” says Ellen Light, backend engineer at Fetch. Fetch trains these models to identify and extract key information on receipts that the company can use to generate valuable insights and reward users. And on Amazon SageMaker, Fetch’s custom ML system is seamlessly scalable. “By using Amazon SageMaker, we have a simple way to scale up our systems, especially for inference and runtime,” says Sam Corzine, ML engineer at Fetch. Meanwhile, standardized model deployments mean less manual work. Fetch heavily relied on the ML training features of Amazon SageMaker, particularly its [training jobs](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrainingJob.html), as it refined and iterated on its models. Fetch can also train ML models in parallel, which speeds up development and deployments. “There’s little friction for us to deploy models,” says Alec Stashevsky, applied scientist at Fetch. “Basically, we don’t have to think about it.” This has increased confidence and improved productivity for the entire company. In one example, a new intern was able to deploy a model himself by his third day on the job. Since adopting Amazon SageMaker for ML tuning, training, and retraining, Fetch has enhanced the accuracy of its document-understanding model by 200 percent. It continues to fine-tune its models for further improvement. “Amazon SageMaker has been a key tool in building these outstanding models,” says Quency Yu, ML engineer at Fetch. To optimize the tuning process, Fetch relies on [Amazon SageMaker Inference Recommender](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-recommender.html), a capability of Amazon SageMaker that reduces the time required to get ML models in production by automating load testing and model tuning. In addition to its custom ML models, Fetch uses [AWS Deep Learning Containers ](https://aws.amazon.com/machine-learning/containers/)(AWS DL Containers), which businesses can use to quickly deploy deep learning environments with optimized, prepackaged container images. This simplifies the process of using libraries from [Hugging Face Inc.](https://huggingface.co/)(Hugging Face), an artificial intelligence technology company and [AWS Partner](https://partners.amazonaws.com/partners/0010h00001jBrjVAAS/Hugging%20Face%20Inc.). Specifically, Fetch uses the Amazon SageMaker Hugging Face Inference Toolkit, an open-source library for serving transformers models, and the Hugging Face AWS Deep Learning Container for training and inference. “Using the flexibility of the Hugging Face AWS Deep Learning Container, we could improve the quality of our models,” says Corzine. “And Hugging Face’s partnership with AWS meant that it was simple to deploy these models.” For every metric that Fetch measures, performance has improved since adopting Amazon SageMaker. The company has reduced latency for its slowest scans by 50 percent. “Our improved accuracy also creates confidence in our data among partners,” says Corzine. With more confidence, partners will increase their use of Fetch’s solution. “Being able to meaningfully improve accuracy on literally every data point using Amazon SageMaker is a huge benefit and propagates throughout our entire business,” says Corzine. Fetch can now extract more types of data from a receipt, and it has the flexibility to structure resulting insights according to the specific needs of brand partners. 
“Leaning into ML has unlocked the ability to extract exactly what our partners want from a receipt,” says Corzine. “Partners can make new types of offers because of our investment in ML, and that’s a huge additional benefit for them.” Users enjoy the updates too; Fetch has grown from 10 million to 18 million monthly active users since it released the new version. “Amazon SageMaker is a game changer for Fetch,” says Corzine. “We use almost every feature extensively. As new features come out, they are immediately valuable. It’s hard to imagine having done this project without the features of Amazon SageMaker.” For example, Fetch migrated from a custom shadow testing pipeline to [Amazon SageMaker shadow testing](https://aws.amazon.com/sagemaker/shadow-testing/)—which validates the performance of new ML models against production models to prevent outages. Now, shadow testing is more direct because Fetch can directly compare performance with production traffic. ## Outcome | Expanding ML to New Use Cases The ML team at Fetch is continually working on new models and iterating on existing ones to tune them for better performance. “Another thing we like is being able to keep our technology stack up to date with new features of Amazon SageMaker,” says Chris Lee, ML developer at Fetch. The company will continue expanding its use of AWS to different ML use cases, such as fraud prevention, across multiple teams. Already one of the biggest consumer engagement software companies, Fetch aims to continue growing. “AWS is a key part of how we plan to scale, and we’ll lean into the features of Amazon SageMaker to continue improving our accuracy,” says Corzine. ## About Fetch Fetch is a consumer engagement company that provides insights on consumer purchases to brand partners. It also offers a mobile rewards app that lets users earn rewards on purchases through a receipt-scanning feature. _If you need support in using Hugging Face on SageMaker for your company, please contact us [here](https://huggingface.co/support#form) - our team will contact you to discuss your requirements!_
huggingface/blog/blob/main/fetch-case-study.md
!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> ## Translation This directory contains examples for finetuning and evaluating transformers on translation tasks. Please tag @patil-suraj with any issues/unexpected behaviors, or send a PR! For deprecated `bertabs` instructions, see [`bertabs/README.md`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/bertabs/README.md). For the old `finetune_trainer.py` and related utils, see [`examples/legacy/seq2seq`](https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq). ### Supported Architectures - `BartForConditionalGeneration` - `FSMTForConditionalGeneration` (translation only) - `MBartForConditionalGeneration` - `MarianMTModel` - `PegasusForConditionalGeneration` - `T5ForConditionalGeneration` - `MT5ForConditionalGeneration` `run_translation.py` is a lightweight examples of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it. For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets#json-files and you also will find examples of these below. ## With Trainer Here is an example of a translation fine-tuning with a MarianMT model: ```bash python examples/pytorch/translation/run_translation.py \ --model_name_or_path Helsinki-NLP/opus-mt-en-ro \ --do_train \ --do_eval \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` MBart and some T5 models require special handling. T5 models `t5-small`, `t5-base`, `t5-large`, `t5-3b` and `t5-11b` must use an additional argument: `--source_prefix "translate {source_lang} to {target_lang}"`. For example: ```bash python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --source_lang en \ --target_lang ro \ --source_prefix "translate English to Romanian: " \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` If you get a terrible BLEU score, make sure that you didn't forget to use the `--source_prefix` argument. For the aforementioned group of T5 models it's important to remember that if you switch to a different language pair, make sure to adjust the source and target values in all 3 language-specific command line argument: `--source_lang`, `--target_lang` and `--source_prefix`. MBart models require a different format for `--source_lang` and `--target_lang` values, e.g. instead of `en` it expects `en_XX`, for `ro` it expects `ro_RO`. 
The full MBart specification for language codes can be found [here](https://huggingface.co/facebook/mbart-large-cc25). For example: ```bash python examples/pytorch/translation/run_translation.py \ --model_name_or_path facebook/mbart-large-en-ro \ --do_train \ --do_eval \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --source_lang en_XX \ --target_lang ro_RO \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` And here is how you would use the translation finetuning on your own files, after adjusting the values for the arguments `--train_file`, `--validation_file` to match your setup: ```bash python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --source_lang en \ --target_lang ro \ --source_prefix "translate English to Romanian: " \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --train_file path_to_jsonlines_file \ --validation_file path_to_jsonlines_file \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` The translation task supports only custom JSONLINES files, with each line being a dictionary with a key `"translation"` whose value is another dictionary whose keys are the two languages of the pair. For example: ```json { "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } } { "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } } ``` Here the languages are Romanian (`ro`) and English (`en`). If you want to use a pre-processed dataset that leads to high BLEU scores, but for the `en-de` language pair, you can use `--dataset_name stas/wmt14-en-de-pre-processed`, as follows: ```bash python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --source_lang en \ --target_lang de \ --source_prefix "translate English to German: " \ --dataset_name stas/wmt14-en-de-pre-processed \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` ## With Accelerate Based on the script [`run_translation_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py). Like `run_translation.py`, this script allows you to fine-tune any of the supported models on a translation task; the main difference is that it exposes the bare training loop, so you can quickly experiment and add any customization you would like. It offers fewer options than the `Trainer`-based script (for instance, you can easily change the options for the optimizer or the dataloaders directly in the script), but it can still run in a distributed setup or on TPUs, and it supports mixed precision by means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library.
You can use the script normally after installing 🤗 Accelerate: ```bash pip install git+https://github.com/huggingface/accelerate ``` then ```bash python run_translation_no_trainer.py \ --model_name_or_path Helsinki-NLP/opus-mt-en-ro \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir ~/tmp/tst-translation ``` You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run ```bash accelerate config ``` and reply to the questions asked. Then run ```bash accelerate test ``` which will check that everything is ready for training. Finally, you can launch training with ```bash accelerate launch run_translation_no_trainer.py \ --model_name_or_path Helsinki-NLP/opus-mt-en-ro \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir ~/tmp/tst-translation ``` The same command will work for: - a CPU-only setup - a setup with one GPU - distributed training with several GPUs (single or multi node) - training on TPUs Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.
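After fine-tuning, the saved checkpoint can be used for inference. Here is a minimal sketch, assuming the model was saved to `/tmp/tst-translation` as in the commands above:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Directory passed as --output_dir in the training commands above (an assumption here).
model_dir = "/tmp/tst-translation"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

# For the T5 checkpoints, keep the same task prefix that was used during training.
text = "translate English to Romanian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```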
huggingface/transformers/blob/main/examples/pytorch/translation/README.md
Using Sentence Transformers at Hugging Face `sentence-transformers` is a library that provides easy methods to compute embeddings (dense vector representations) for sentences, paragraphs and images. Texts are embedded in a vector space such that similar text is close, which enables applications such as semantic search, clustering, and retrieval. ## Exploring sentence-transformers in the Hub You can find over 500 `sentence-transformers` models by filtering at the left of the [models page](https://huggingface.co/models?library=sentence-transformers&sort=downloads). Most of these models support different tasks, such as [`feature-extraction`](https://huggingface.co/models?library=sentence-transformers&pipeline_tag=feature-extraction&sort=downloads) to generate embeddings, and [`sentence-similarity`](https://huggingface.co/models?library=sentence-transformers&pipeline_tag=sentence-similarity&sort=downloads) to determine how similar a given sentence is to others. You can also find an overview of the official pre-trained models in [the official docs](https://www.sbert.net/docs/pretrained_models.html). All models on the Hub come with the following features: 1. An automatically generated model card with a description, example code snippets, architecture overview, and more. 2. Metadata tags that help with discoverability and contain information such as the license. 3. An interactive widget you can use to play with the model directly in the browser. 4. An Inference API that allows you to make inference requests. <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-sentence_transformers_widget.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-sentence_transformers_widget-dark.png"/> </div> ## Using existing models The pre-trained models on the Hub can be loaded with a single line of code: ```py from sentence_transformers import SentenceTransformer model = SentenceTransformer('model_name') ``` Here is an example that encodes sentences and then computes the similarity between them for semantic search. ```py from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1') query_embedding = model.encode('How big is London') passage_embedding = model.encode(['London has 9,787,426 inhabitants at the 2011 census', 'London is known for its financial district']) print("Similarity:", util.dot_score(query_embedding, passage_embedding)) ``` If you want to see how to load a specific model, you can click `Use in sentence-transformers` and you will be given a working snippet that you can use to load it!
<div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-sentence_transformers_snippet1.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-sentence_transformers_snippet1-dark.png"/> </div> <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-sentence_transformers_snippet2.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-sentence_transformers_snippet2-dark.png"/> </div> ## Sharing your models You can share your Sentence Transformers by using the `save_to_hub` method from a trained model. ```py from sentence_transformers import SentenceTransformer # Load or train a model model.save_to_hub("my_new_model") ``` This command creates a repository with an automatically generated model card, an inference widget, example code snippets, and more! [Here](https://huggingface.co/osanseviero/my_new_model) is an example. ## Additional resources * Sentence Transformers [library](https://github.com/UKPLab/sentence-transformers). * Sentence Transformers [docs](https://www.sbert.net/). * Integration with Hub [announcement](https://huggingface.co/blog/sentence-transformers-in-the-hub).
huggingface/hub-docs/blob/main/docs/hub/sentence-transformers.md
How to configure OIDC SSO with Okta In this guide, we will use Okta as the SSO provider and with the Open ID Connect (OIDC) protocol as our preferred identity protocol. <Tip warning={true}> This feature is part of the <a href="https://huggingface.co/enterprise" target="_blank">Enterprise Hub</a>. </Tip> ### Step 1: Create a new application in your Identity Provider Open a new tab/window in your browser and sign in to your Okta account. Navigate to "Admin/Applications" and click the "Create App Integration" button. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-1.png"/> </div> Then choose an “OIDC - OpenID Connect” application, select the application type "Web Application" and click "Create". <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-2.png"/> </div> ### Step 2: Configure your application in Okta Open a new tab/window in your browser and navigate to the SSO section of your organization's settings. Select the OIDC protocol. <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-navigation-settings.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-navigation-settings-dark.png"/> </div> <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-settings.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-settings-dark.png"/> </div> Copy the "Redirection URI" from the organization's settings on Hugging Face, and paste it in the "Sign-in redirect URI" field on Okta. The URL looks like this: `https://huggingface.co/organizations/[organizationIdentifier]/oidc/consume`. You can leave the optional Sign-out redirect URIs blank. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-3.png"/> </div> Save your new application. ### Step 3: Finalize configuration on Hugging Face In your Okta application, under "General", find the following fields: - Client ID - Client secret - Issuer URL You will need these to finalize the SSO setup on Hugging Face. The Okta Issuer URL is generally a URL like `https://tenantId.okta.com`; you can refer to their [guide](https://support.okta.com/help/s/article/What-is-theIssuerlocated-under-the-OpenID-Connect-ID-Token-app-settings-used-for?language=en_US) for more details. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-4.png"/> </div> In the SSO section of your organization's settings on Hugging Face, copy-paste these values from Okta: - Client ID - Client Secret <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-5.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-5-dark.png"/> </div> You can now click on "Update and Test OIDC configuration" to save the settings. 
You should be redirected to your SSO provider (IdP) login prompt. Once logged in, you'll be redirected to your organization's settings page. A green check mark near the OIDC selector will attest that the test was successful. <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-6.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/sso-okta-guide-6-dark.png"/> </div> ### Step 4: Enable SSO in your organization Now that Single Sign-On is configured and tested, you can enable it for members of your organization by clicking on the "Enable" button. Once enabled, members of your organization must complete the SSO authentication flow described in the [How does it work?](./security-sso#how-does-it-work) section.
huggingface/hub-docs/blob/main/docs/hub/security-sso-okta-oidc.md
Connecting to a Database Related spaces: https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard Tags: TABULAR, PLOTS ## Introduction This guide explains how you can use Gradio to connect your app to a database. We will be connecting to a PostgreSQL database hosted on AWS, but gradio is completely agnostic to the type of database you are connecting to and where it's hosted. As long as you can write Python code to connect to your data, you can display it in a web UI with gradio 💪 ## Overview We will be analyzing bike share data from Chicago. The data is hosted on kaggle [here](https://www.kaggle.com/datasets/evangower/cyclistic-bike-share?select=202203-divvy-tripdata.csv). Our goal is to create a dashboard that will enable our business stakeholders to answer the following questions: 1. Are electric bikes more popular than regular bikes? 2. What are the top 5 most popular departure bike stations? At the end of this guide, we will have a functioning application that looks like this: <gradio-app space="gradio/chicago-bikeshare-dashboard"> </gradio-app> ## Step 1 - Creating your database We will be storing our data on a PostgreSQL database hosted on Amazon's RDS service. Create an AWS account if you don't already have one and create a PostgreSQL database on the free tier. **Important**: If you plan to host this demo on HuggingFace Spaces, make sure your database is on port **8080**. Spaces will block all outgoing connections unless they are made to port 80, 443, or 8080 as noted [here](https://huggingface.co/docs/hub/spaces-overview#networking). RDS will not let you create a PostgreSQL instance on ports 80 or 443. Once your database is created, download the dataset from Kaggle and upload it to your database. For the sake of this demo, we will only upload the March 2022 data. ## Step 2.a - Write your ETL code We will be querying our database for the total count of rides split by the type of bicycle (electric, standard, or docked). We will also query for the total count of rides that depart from each station and take the top 5. We will then take the result of our queries and visualize them with matplotlib. We will use the pandas [read_sql](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html) method to connect to the database. This requires the `psycopg2` library to be installed. In order to connect to our database, we will specify the database username, password, and host as environment variables. This will make our app more secure by avoiding storing sensitive information as plain text in our application files.
```python import os import pandas as pd import matplotlib.pyplot as plt DB_USER = os.getenv("DB_USER") DB_PASSWORD = os.getenv("DB_PASSWORD") DB_HOST = os.getenv("DB_HOST") PORT = 8080 DB_NAME = "bikeshare" connection_string = f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}?port={PORT}&dbname={DB_NAME}" def get_count_ride_type(): df = pd.read_sql( """ SELECT COUNT(ride_id) as n, rideable_type FROM rides GROUP BY rideable_type ORDER BY n DESC """, con=connection_string ) fig_m, ax = plt.subplots() ax.bar(x=df['rideable_type'], height=df['n']) ax.set_title("Number of rides by bicycle type") ax.set_ylabel("Number of Rides") ax.set_xlabel("Bicycle Type") return fig_m def get_most_popular_stations(): df = pd.read_sql( """ SELECT COUNT(ride_id) as n, MAX(start_station_name) as station FROM RIDES WHERE start_station_name is NOT NULL GROUP BY start_station_id ORDER BY n DESC LIMIT 5 """, con=connection_string ) fig_m, ax = plt.subplots() ax.bar(x=df['station'], height=df['n']) ax.set_title("Most popular stations") ax.set_ylabel("Number of Rides") ax.set_xlabel("Station Name") ax.set_xticklabels( df['station'], rotation=45, ha="right", rotation_mode="anchor" ) ax.tick_params(axis="x", labelsize=8) fig_m.tight_layout() return fig_m ``` If you were to run our script locally, you could pass in your credentials as environment variables like so: ```bash DB_USER='username' DB_PASSWORD='password' DB_HOST='host' python app.py ``` ## Step 2.c - Write your gradio app We will display our matplotlib plots in two separate `gr.Plot` components displayed side by side using `gr.Row()`. Because we have wrapped our data-fetching functions in `demo.load()` event triggers, our demo will fetch the latest data **dynamically** from the database each time the web page loads. 🪄 ```python import gradio as gr with gr.Blocks() as demo: with gr.Row(): bike_type = gr.Plot() station = gr.Plot() demo.load(get_count_ride_type, inputs=None, outputs=bike_type) demo.load(get_most_popular_stations, inputs=None, outputs=station) demo.launch() ``` ## Step 3 - Deployment If you run the code above, your app will start running locally. You can even get a temporary shareable link by passing the `share=True` parameter to `launch`. But what if you want a permanent deployment solution? Let's deploy our Gradio app to the free HuggingFace Spaces platform. If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations). You will have to add the `DB_USER`, `DB_PASSWORD`, and `DB_HOST` variables as "Repo Secrets". You can do this in the "Settings" tab. ![secrets](https://github.com/gradio-app/gradio/blob/main/guides/assets/secrets.png?raw=true) ## Conclusion Congratulations! You know how to connect your gradio app to a database hosted on the cloud! ☁️ Our dashboard is now running on [Spaces](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard). The complete code is [here](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard/blob/main/app.py). As you can see, gradio gives you the power to connect to your data wherever it lives and display it however you want! 🔥
gradio-app/gradio/blob/main/guides/07_tabular-data-science-and-plots/01_connecting-to-a-database.md
-- title: CUAD emoji: 🤗 colorFrom: blue colorTo: red sdk: gradio sdk_version: 3.19.1 app_file: app.py pinned: false tags: - evaluate - metric description: >- This metric wrap the official scoring script for version 1 of the Contract Understanding Atticus Dataset (CUAD). Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. --- # Metric Card for CUAD ## Metric description This metric wraps the official scoring script for version 1 of the [Contract Understanding Atticus Dataset (CUAD)](https://huggingface.co/datasets/cuad), which is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. The CUAD metric computes several scores: [Exact Match](https://huggingface.co/metrics/exact_match), [F1 score](https://huggingface.co/metrics/f1), Area Under the Precision-Recall Curve, [Precision](https://huggingface.co/metrics/precision) at 80% [recall](https://huggingface.co/metrics/recall) and Precision at 90% recall. ## How to use The CUAD metric takes two inputs : `predictions`, a list of question-answer dictionaries with the following key-values: - `id`: the id of the question-answer pair as given in the references. - `prediction_text`: a list of possible texts for the answer, as a list of strings depending on a threshold on the confidence probability of each prediction. `references`: a list of question-answer dictionaries with the following key-values: - `id`: the id of the question-answer pair (the same as above). - `answers`: a dictionary *in the CUAD dataset format* with the following keys: - `text`: a list of possible texts for the answer, as a list of strings. - `answer_start`: a list of start positions for the answer, as a list of ints. Note that `answer_start` values are not taken into account to compute the metric. ```python from evaluate import load cuad_metric = load("cuad") predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] results = cuad_metric.compute(predictions=predictions, references=references) ``` ## Output values The output of the CUAD metric consists of a dictionary that contains one or several of the following metrics: `exact_match`: The normalized answers that exactly match the reference answer, with a range between 0.0 and 1.0 (see [exact match](https://huggingface.co/metrics/exact_match) for more information). `f1`: The harmonic mean of the precision and recall (see [F1 score](https://huggingface.co/metrics/f1) for more information). Its range is between 0.0 and 1.0 -- its lowest possible value is 0, if either the precision or the recall is 0, and its highest possible value is 1.0, which means perfect precision and recall. 
`aupr`: The Area Under the Precision-Recall curve, with a range between 0.0 and 1.0, with a higher value representing both high recall and high precision, and a low value representing low values for both. See the [Wikipedia article](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) for more information. `prec_at_80_recall`: The fraction of true examples among the predicted examples at a recall rate of 80%. Its range is between 0.0 and 1.0. For more information, see [precision](https://huggingface.co/metrics/precision) and [recall](https://huggingface.co/metrics/recall). `prec_at_90_recall`: The fraction of true examples among the predicted examples at a recall rate of 90%. Its range is between 0.0 and 1.0. ### Values from popular papers The [original CUAD paper](https://arxiv.org/pdf/2103.06268.pdf) reports that a [DeBERTa model](https://huggingface.co/microsoft/deberta-base) attains an AUPR of 47.8%, a Precision at 80% Recall of 44.0%, and a Precision at 90% Recall of 17.8% (they do not report F1 or Exact Match separately). For more recent model performance, see the [dataset leaderboard](https://paperswithcode.com/dataset/cuad). ## Examples Maximal values : ```python from evaluate import load cuad_metric = load("cuad") predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] results = cuad_metric.compute(predictions=predictions, references=references) print(results) {'exact_match': 100.0, 'f1': 100.0, 'aupr': 0.0, 'prec_at_80_recall': 1.0, 'prec_at_90_recall': 1.0} ``` Minimal values: ```python from evaluate import load cuad_metric = load("cuad") predictions = [{'prediction_text': ['The Company appoints the Distributor as an exclusive distributor of Products in the Market, subject to the terms and conditions of this Agreement.'], 'id': 'LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Exclusivity_0'}] references = [{'answers': {'answer_start': [143], 'text': 'The seller'}, 'id': 'LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Exclusivity_0'}] results = cuad_metric.compute(predictions=predictions, references=references) print(results) {'exact_match': 0.0, 'f1': 0.0, 'aupr': 0.0, 'prec_at_80_recall': 0, 'prec_at_90_recall': 0} ``` Partial match: ```python from evaluate import load cuad_metric = load("cuad") predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] predictions = [{'prediction_text': ['The Company appoints the Distributor as an exclusive distributor of Products in the Market, subject to the terms and conditions of this Agreement.', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}] results = cuad_metric.compute(predictions=predictions, references=references) print(results) {'exact_match': 100.0, 'f1': 50.0, 'aupr': 0.0, 'prec_at_80_recall': 0, 'prec_at_90_recall': 0} ``` ## Limitations and bias This metric works only with datasets that have the same format as the [CUAD 
dataset](https://huggingface.co/datasets/cuad). The limitations and biases of this dataset are not discussed, but it could exhibit annotation bias given the homogeneity of its annotators. In terms of the metric itself, the reliability of AUPR has been debated: its estimates are quite noisy, and reducing the Precision-Recall curve to a single number hides the tradeoffs between the different systems or performance points being compared, rather than describing the performance of an individual system. Reporting the original F1 and exact match scores alongside it is therefore useful to ensure a more complete representation of system performance. ## Citation ```bibtex @article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} } ``` ## Further References - [CUAD dataset homepage](https://www.atticusprojectai.org/cuad-v1-performance-announcements)
huggingface/evaluate/blob/main/metrics/cuad/README.md
Gradio Demo: image_classifier_2 ``` !pip install -q gradio pillow torch torchvision ``` ``` # Downloading files from the demo repo import os os.mkdir('files') !wget -q -O files/imagenet_labels.json https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_2/files/imagenet_labels.json ``` ``` import requests import torch from PIL import Image from torchvision import transforms import gradio as gr model = torch.hub.load("pytorch/vision:v0.6.0", "resnet18", pretrained=True).eval() # Download human-readable labels for ImageNet. response = requests.get("https://git.io/JJkYN") labels = response.text.split("\n") def predict(inp): inp = Image.fromarray(inp.astype("uint8"), "RGB") inp = transforms.ToTensor()(inp).unsqueeze(0) with torch.no_grad(): prediction = torch.nn.functional.softmax(model(inp)[0], dim=0) return {labels[i]: float(prediction[i]) for i in range(1000)} inputs = gr.Image() outputs = gr.Label(num_top_classes=3) demo = gr.Interface(fn=predict, inputs=inputs, outputs=outputs) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/image_classifier_2/run.ipynb
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Würstchen <img src="https://github.com/dome272/Wuerstchen/assets/61938694/0617c863-165a-43ee-9303-2a17299a0cf9"> [Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models](https://huggingface.co/papers/2306.00637) is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. The abstract from the paper is: *We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1's 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility.* ## Würstchen Overview Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://huggingface.co/papers/2306.00637)). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. 
## Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. - Higher resolution (1024x1024 up to 2048x2048) - Faster inference - Multi Aspect Resolution Sampling - Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: - v2-base - v2-aesthetic - **(default)** v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. A comparison can be seen here: <img src="https://github.com/dome272/Wuerstchen/assets/61938694/2914830f-cbd3-461c-be64-d50734f4b49d" width=500> ## Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows: ```python import torch from diffusers import AutoPipelineForText2Image from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") caption = "Anthropomorphic cat dressed as a fire fighter" images = pipe( caption, width=1024, height=1536, prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, prior_guidance_scale=4.0, num_images_per_prompt=2, ).images ``` For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the `prior_pipeline`. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the `decoder_pipeline`. For more details, take a look at the [paper](https://huggingface.co/papers/2306.00637). 
```python import torch from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS device = "cuda" dtype = torch.float16 num_images_per_prompt = 2 prior_pipeline = WuerstchenPriorPipeline.from_pretrained( "warp-ai/wuerstchen-prior", torch_dtype=dtype ).to(device) decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( "warp-ai/wuerstchen", torch_dtype=dtype ).to(device) caption = "Anthropomorphic cat dressed as a fire fighter" negative_prompt = "" prior_output = prior_pipeline( prompt=caption, height=1024, width=1536, timesteps=DEFAULT_STAGE_C_TIMESTEPS, negative_prompt=negative_prompt, guidance_scale=4.0, num_images_per_prompt=num_images_per_prompt, ) decoder_output = decoder_pipeline( image_embeddings=prior_output.image_embeddings, prompt=caption, negative_prompt=negative_prompt, guidance_scale=0.0, output_type="pil", ).images[0] decoder_output ``` ## Speed-Up Inference You can make use of `torch.compile` function and gain a speed-up of about 2-3x: ```python prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) ``` ## Limitations - Due to the high compression employed by Würstchen, generations can lack a good amount of detail. To our human eye, this is especially noticeable in faces, hands etc. - **Images can only be generated in 128-pixel steps**, e.g. the next higher resolution after 1024x1024 is 1152x1152 - The model lacks the ability to render correct text in images - The model often does not achieve photorealism - Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at [dome272/Wuerstchen](https://github.com/dome272/Wuerstchen). ## WuerstchenCombinedPipeline [[autodoc]] WuerstchenCombinedPipeline - all - __call__ ## WuerstchenPriorPipeline [[autodoc]] WuerstchenPriorPipeline - all - __call__ ## WuerstchenPriorPipelineOutput [[autodoc]] pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput ## WuerstchenDecoderPipeline [[autodoc]] WuerstchenDecoderPipeline - all - __call__ ## Citation ```bibtex @misc{pernias2023wuerstchen, title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, year={2023}, eprint={2306.00637}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/wuerstchen.md
Saving and reloading a dataset. In this video we'll take a look at saving a dataset in various formats, and explore the ways to reload the saved data. When you download a dataset, the processing scripts and data are stored locally on your computer. The cache allows the Datasets library to avoid re-downloading or processing the entire dataset every time you use it. The data is stored in the form of Arrow tables whose location can be found by accessing the dataset's cache_files attribute. In this example, we've downloaded the allocine dataset from the Hugging Face Hub and you can see there are three Arrow files stored in the cache, one for each split. But in many cases, you'll want to save your dataset in a different location or format. As shown in the table, the Datasets library provides four main functions to achieve this. You're probably familiar with the CSV and JSON formats, both of which are great if you want to save small to medium-sized datasets. But if your dataset is huge, you'll want to save it in either the Arrow or Parquet formats. Arrow files are great if you plan to reload or process the data in the near future. Parquet files are designed for long-term disk storage and are very space efficient. Let's take a closer look at each format. To save a Dataset or a DatasetDict object in the Arrow format we use the save_to_disk function. As you can see in this example, we simply provide the path we wish to save the data to, and the Datasets library will automatically create a directory for each split to store the Arrow table and metadata. Since we're dealing with a DatasetDict object that has multiple splits, this information is also stored in the dataset_dict.json file. Now when we want to reload the Arrow datasets, we use the load_from_disk function. We simply pass the path of our dataset directory and voilà, the original dataset is recovered! If we want to save our datasets in the CSV format we use the to_csv function. In this case you'll need to loop over the splits of the DatasetDict object and save each dataset as an individual CSV file. Since the to_csv function is based on the one from Pandas, you can pass keyword arguments to configure the output. In this example, we've set the index argument to None to prevent the dataset's index column from being included in the CSV files. To reload our CSV files, we use the load_dataset function together with the csv loading script and the data_files argument, which specifies the filenames associated with each split. As you can see in this example, by providing all the splits and their filenames, we've recovered the original DatasetDict object. Saving a dataset in the JSON or Parquet formats is very similar to the CSV case. We use either the to_json function for JSON files or the to_parquet function for Parquet ones. And just like the CSV case, we need to loop over the splits and save each one as an individual file. Once our datasets are saved as JSON or Parquet files, we can reload them again with the appropriate script in the load_dataset function, and a data_files argument as before. This example shows how we can reload our saved datasets in either format. And with that you now know how to save your datasets in various formats!
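To make the workflow described above concrete, here is a minimal sketch (not taken from the video itself) that saves and reloads a DatasetDict in both the Arrow and CSV formats, assuming the allocine dataset mentioned earlier:

```python
from datasets import load_dataset, load_from_disk

raw_datasets = load_dataset("allocine")  # a DatasetDict with train/validation/test splits

# Arrow format: save_to_disk() writes every split, load_from_disk() restores them
raw_datasets.save_to_disk("allocine-arrow")
reloaded = load_from_disk("allocine-arrow")

# CSV format: save each split individually, then reload with the csv loading script
for split, dataset in raw_datasets.items():
    dataset.to_csv(f"allocine-{split}.csv", index=None)

data_files = {split: f"allocine-{split}.csv" for split in raw_datasets}
csv_datasets = load_dataset("csv", data_files=data_files)
```

The same loop works for the JSON and Parquet cases by swapping in to_json or to_parquet and the corresponding loading script.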
huggingface/course/blob/main/subtitles/en/raw/chapter5/03c_save-reload.md
!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Text classification ## GLUE tasks The script [`run_glue.py`](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/optimization/text-classification/run_glue.py) allows us to apply graph optimizations and fusion using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for sequence classification tasks such as the ones from the [GLUE benchmark](https://gluebenchmark.com/). The following example applies graph optimization to a DistilBERT model fine-tuned on the SST-2 task. Here the optimization level is set to 1, enabling basic optimizations such as redundant node elimination and constant folding. Higher optimization levels result in a hardware-dependent optimized graph. ```bash python run_glue.py \ --model_name_or_path distilbert-base-uncased-finetuned-sst-2-english \ --task_name sst2 \ --optimization_level 1 \ --do_eval \ --output_dir /tmp/optimized_distilbert_sst2 ```
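If you prefer to apply a similar graph optimization programmatically rather than through the example script, a sketch using 🤗 Optimum's ONNX Runtime classes could look as follows. This is an illustration, not the script's implementation, and the export argument may differ between Optimum versions:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Export the PyTorch checkpoint to ONNX
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

# Optimization level 1 enables basic graph optimizations
# (redundant node elimination, constant folding)
optimizer = ORTOptimizer.from_pretrained(ort_model)
optimization_config = OptimizationConfig(optimization_level=1)
optimizer.optimize(save_dir="optimized_distilbert_sst2", optimization_config=optimization_config)
```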
huggingface/optimum/blob/main/examples/onnxruntime/optimization/text-classification/README.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Tuners A tuner (or adapter) is a module that can be plugged into a `torch.nn.Module`. [`BaseTuner`] is the base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and replacing a target module with the adapter module. [`BaseTunerLayer`] is a base class for adapter layers. It offers methods and attributes for managing adapters, such as activating and disabling them. ## BaseTuner [[autodoc]] tuners.tuners_utils.BaseTuner ## BaseTunerLayer [[autodoc]] tuners.tuners_utils.BaseTunerLayer
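As a rough illustration of what these base classes enable in practice, here is a sketch that uses the high-level PEFT API (not the tuner internals) to inject a LoRA adapter and temporarily disable its adapter layers; the model name and target modules are just example choices:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# get_peft_model wraps the base model; the LoRA tuner replaces the target modules
# with adapter layers built on top of BaseTunerLayer.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()

# Adapter layers can be switched off temporarily, falling back to the base weights.
input_ids = torch.tensor([[464, 2746, 318]])  # a few arbitrary GPT-2 token ids
with peft_model.disable_adapter():
    logits = peft_model(input_ids=input_ids).logits
```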
huggingface/peft/blob/main/docs/source/package_reference/tuners.md
!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Image classification examples This directory contains 2 scripts that showcase how to fine-tune any model supported by the [`TFAutoModelForImageClassification` API](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModelForImageClassification) (such as [ViT](https://huggingface.co/docs/transformers/main/en/model_doc/vit), [ConvNeXT](https://huggingface.co/docs/transformers/main/en/model_doc/convnext), [ResNet](https://huggingface.co/docs/transformers/main/en/model_doc/resnet), [Swin Transformer](https://huggingface.co/docs/transformers/main/en/model_doc/swin)...) using TensorFlow. They can be used to fine-tune models on both [datasets from the hub](#using-datasets-from-hub) as well as on [your own custom data](#using-your-own-data). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/image_classification_inference_widget.png" height="400" /> Try out the inference widget here: https://huggingface.co/google/vit-base-patch16-224 ## TensorFlow Based on the script [`run_image_classification.py`](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/image-classification/run_image_classification.py). ### Using datasets from Hub Here we show how to fine-tune a Vision Transformer (`ViT`) on the [beans](https://huggingface.co/datasets/beans) dataset, to classify the disease type of bean leaves. The following will train a model and push it to the `amyeroberts/vit-base-beans` repo. ```bash python run_image_classification.py \ --dataset_name beans \ --output_dir ./beans_outputs/ \ --remove_unused_columns False \ --do_train \ --do_eval \ --push_to_hub \ --hub_model_id amyeroberts/vit-base-beans \ --learning_rate 2e-5 \ --num_train_epochs 5 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --save_total_limit 3 \ --seed 1337 ``` 👀 See the results here: [amyeroberts/vit-base-beans](https://huggingface.co/amyeroberts/vit-base-beans). Note that you can replace the model and dataset by simply setting the `model_name_or_path` and `dataset_name` arguments respectively, with any model or dataset from the [hub](https://huggingface.co/). For an overview of all possible arguments, we refer to the [docs](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) of the `TrainingArguments`, which can be passed as flags. > If your model classification head dimensions do not fit the number of labels in the dataset, you can specify `--ignore_mismatched_sizes` to adapt it. ### Using your own data To use your own dataset, there are 2 ways: - you can either provide your own folders as `--train_dir` and/or `--validation_dir` arguments - you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the `--dataset_name` argument. 
Below, we explain both in more detail. #### Provide them as folders If you provide your own folders with images, the script expects the following directory structure: ```bash root/dog/xxx.png root/dog/xxy.png root/dog/[...]/xxz.png root/cat/123.png root/cat/nsdf3.png root/cat/[...]/asd932_.png ``` In other words, you need to organize your images in subfolders, based on their class. You can then run the script like this: ```bash python run_image_classification.py \ --train_dir <path-to-train-root> \ --output_dir ./outputs/ \ --remove_unused_columns False \ --do_train \ --do_eval ``` Internally, the script will use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature which will automatically turn the folders into 🤗 Dataset objects. ##### 💡 The above will split the train dir into training and evaluation sets - To control the split amount, use the `--train_val_split` flag. - To provide your own validation split in its own directory, you can pass the `--validation_dir <path-to-val-root>` flag. #### Upload your data to the hub, as a (possibly private) repo To upload your image dataset to the hub you can use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature available in 🤗 Datasets. Simply do the following: ```python from datasets import load_dataset # example 1: local folder dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") # example 2: local files (suppoted formats are tar, gzip, zip, xz, rar, zstd) dataset = load_dataset("imagefolder", data_files="path_to_zip_file") # example 3: remote files (suppoted formats are tar, gzip, zip, xz, rar, zstd) dataset = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip") # example 4: providing several splits dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]}) ``` `ImageFolder` will create a `label` column, and the label name is based on the directory name. Next, push it to the hub! ```python # assuming you have ran the huggingface-cli login command in a terminal dataset.push_to_hub("name_of_your_dataset") # if you want to push to a private repo, simply pass private=True: dataset.push_to_hub("name_of_your_dataset", private=True) ``` and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub (as explained in [Using datasets from the 🤗 hub](#using-datasets-from-hub)). More on this can also be found in [this blog post](https://huggingface.co/blog/image-search-datasets). ### Sharing your model on 🤗 Hub 0. If you haven't already, [sign up](https://huggingface.co/join) for a 🤗 account 1. Make sure you have `git-lfs` installed and git set up. ```bash $ apt install git-lfs $ git config --global user.email "you@example.com" $ git config --global user.name "Your Name" ``` 2. Log in with your HuggingFace account credentials using `huggingface-cli`: ```bash $ huggingface-cli login # ...follow the prompts ``` 3. When running the script, pass the following arguments: ```bash python run_image_classification.py \ --push_to_hub \ --push_to_hub_model_id <name-your-model> \ ... ```
huggingface/transformers/blob/main/examples/tensorflow/image-classification/README.md
Here is how to convert a GPT2 model generated outside of `transformers`: * [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)-generated model: Use [convert_megatron_gpt2_checkpoint.py](../megatron_gpt2/convert_megatron_gpt2_checkpoint.py) * [big-science fork of Megatron-Deepspeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed/)-generated model: Use the instructions [here](https://github.com/bigscience-workshop/bigscience/tree/aa872e754106f6678e8a9dac8c6962404ba39a6d/train/tr1-13B-base#checkpoint-conversion-and-upload). This approach uses a set of scripts that require the use of this particular fork of Megatron-Deepspeed.
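Once converted, the checkpoint should load like any other GPT-2 model. A quick sanity check might look like the sketch below; the output path is hypothetical, so substitute the directory produced by the conversion script:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical output directory of the conversion script
model = GPT2LMHeadModel.from_pretrained("path/to/converted/megatron_gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # standard GPT-2 tokenizer

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```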
huggingface/transformers/blob/main/src/transformers/models/gpt2/CONVERSION.md
Gradio Demo: scatter_plot ``` !pip install -q gradio vega_datasets pandas ``` ``` import gradio as gr from vega_datasets import data cars = data.cars() iris = data.iris() # # Or generate your own fake data # import pandas as pd # import random # cars_data = { # "Name": ["car name " + f" {int(i/10)}" for i in range(400)], # "Miles_per_Gallon": [random.randint(10, 30) for _ in range(400)], # "Origin": [random.choice(["USA", "Europe", "Japan"]) for _ in range(400)], # "Horsepower": [random.randint(50, 250) for _ in range(400)], # } # iris_data = { # "petalWidth": [round(random.uniform(0, 2.5), 2) for _ in range(150)], # "petalLength": [round(random.uniform(0, 7), 2) for _ in range(150)], # "species": [ # random.choice(["setosa", "versicolor", "virginica"]) for _ in range(150) # ], # } # cars = pd.DataFrame(cars_data) # iris = pd.DataFrame(iris_data) def scatter_plot_fn(dataset): if dataset == "iris": return gr.ScatterPlot( value=iris, x="petalWidth", y="petalLength", color="species", title="Iris Dataset", color_legend_title="Species", x_title="Petal Width", y_title="Petal Length", tooltip=["petalWidth", "petalLength", "species"], caption="", ) else: return gr.ScatterPlot( value=cars, x="Horsepower", y="Miles_per_Gallon", color="Origin", tooltip="Name", title="Car Data", y_title="Miles per Gallon", color_legend_title="Origin of Car", caption="MPG vs Horsepower of various cars", ) with gr.Blocks() as scatter_plot: with gr.Row(): with gr.Column(): dataset = gr.Dropdown(choices=["cars", "iris"], value="cars") with gr.Column(): plot = gr.ScatterPlot() dataset.change(scatter_plot_fn, inputs=dataset, outputs=plot) scatter_plot.load(fn=scatter_plot_fn, inputs=dataset, outputs=plot) if __name__ == "__main__": scatter_plot.launch() ```
gradio-app/gradio/blob/main/demo/scatter_plot/run.ipynb
Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## [0.13.2] - [#1096] Python 3.11 support ## [0.13.1] - [#1072] Fixing Roberta type ids. ## [0.13.0] - [#956] PyO3 version upgrade - [#1055] M1 automated builds - [#1008] `Decoder` is now a composable trait, but without being backward incompatible - [#1047, #1051, #1052] `Processor` is now a composable trait, but without being backward incompatible Both trait changes warrant a "major" number since, despite best efforts to not break backward compatibility, the code is different enough that we cannot be exactly sure. ## [0.12.1] - [#938] **Reverted breaking change**. https://github.com/huggingface/transformers/issues/16520 ## [0.12.0] YANKED Bump minor version because of a breaking change. - [#938] [REVERTED IN 0.12.1] **Breaking change**. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own. tokenizers should be error free. - [#939] Making the regex in `ByteLevel` pre_tokenizer optional (necessary for BigScience) - [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens) - [#954] Fixed not being able to save vocabularies with holes in vocab (ConvBert). Yell warnings instead, but stop panicking. - [#962] Fix tests for python 3.10 - [#961] Added link for Ruby port of `tokenizers` ## [0.11.6] - [#919] Fixing single_word AddedToken. (regression from 0.11.2) - [#916] Deserializing faster `added_tokens` by loading them in batch. ## [0.11.5] - [#895] Build `python 3.10` wheels. ## [0.11.4] - [#884] Fixing bad deserialization following inclusion of a default for Punctuation ## [0.11.3] - [#882] Fixing Punctuation deserialize without argument. - [#868] Fixing missing direction in TruncationParams - [#860] Adding TruncationSide to TruncationParams ## [0.11.0] ### Fixed - [#585] Conda version should now work on old CentOS - [#844] Fixing interaction between `is_pretokenized` and `trim_offsets`. - [#851] Doc links ### Added - [#657]: Add SplitDelimiterBehavior customization to Punctuation constructor - [#845]: Documentation for `Decoders`. 
### Changed - [#850]: Added a feature gate to enable disabling `http` features - [#718]: Fix `WordLevel` tokenizer determinism during training - [#762]: Add a way to specify the unknown token in `SentencePieceUnigramTokenizer` - [#770]: Improved documentation for `UnigramTrainer` - [#780]: Add `Tokenizer.from_pretrained` to load tokenizers from the Hugging Face Hub - [#793]: Saving a pretty JSON file by default when saving a tokenizer ## [0.10.3] ### Fixed - [#686]: Fix SPM conversion process for whitespace deduplication - [#707]: Fix stripping strings containing Unicode characters ### Added - [#693]: Add a CTC Decoder for Wave2Vec models ### Removed - [#714]: Removed support for Python 3.5 ## [0.10.2] ### Fixed - [#652]: Fix offsets for `Precompiled` corner case - [#656]: Fix BPE `continuing_subword_prefix` - [#674]: Fix `Metaspace` serialization problems ## [0.10.1] ### Fixed - [#616]: Fix SentencePiece tokenizers conversion - [#617]: Fix offsets produced by Precompiled Normalizer (used by tokenizers converted from SPM) - [#618]: Fix Normalizer.normalize with `PyNormalizedStringRefMut` - [#620]: Fix serialization/deserialization for overlapping models - [#621]: Fix `ByteLevel` instantiation from a previously saved state (using `__getstate__()`) ## [0.10.0] ### Added - [#508]: Add a Visualizer for notebooks to help understand how the tokenizers work - [#519]: Add a `WordLevelTrainer` used to train a `WordLevel` model - [#533]: Add support for conda builds - [#542]: Add Split pre-tokenizer to easily split using a pattern - [#544]: Ability to train from memory. This also improves the integration with `datasets` - [#590]: Add getters/setters for components on BaseTokenizer - [#574]: Add `fust_unk` option to SentencePieceBPETokenizer ### Changed - [#509]: Automatically stubbing the `.pyi` files - [#519]: Each `Model` can return its associated `Trainer` with `get_trainer()` - [#530]: The various attributes on each component can be get/set (ie. `tokenizer.model.dropout = 0.1`) - [#538]: The API Reference has been improved and is now up-to-date. ### Fixed - [#519]: During training, the `Model` is now trained in-place. This fixes several bugs that were forcing to reload the `Model` after a training. - [#539]: Fix `BaseTokenizer` enable_truncation docstring ## [0.9.4] ### Fixed - [#492]: Fix `from_file` on `BertWordPieceTokenizer` - [#498]: Fix the link to download `sentencepiece_model_pb2.py` - [#500]: Fix a typo in the docs quicktour ### Changed - [#506]: Improve Encoding mappings for pairs of sequence ## [0.9.3] ### Fixed - [#470]: Fix hanging error when training with custom component - [#476]: TemplateProcessing serialization is now deterministic - [#481]: Fix SentencePieceBPETokenizer.from_files ### Added - [#477]: UnicodeScripts PreTokenizer to avoid merges between various scripts - [#480]: Unigram now accepts an `initial_alphabet` and handles `special_tokens` correctly ## [0.9.2] ### Fixed - [#464]: Fix a problem with RobertaProcessing being deserialized as BertProcessing ## [0.9.1] ### Fixed - [#459]: Fix a problem with deserialization ## [0.9.0] ### Fixed - [#362]: Fix training deadlock with Python components. 
- [#363]: Fix a crash when calling `.train` with some non-existent files - [#355]: Remove a lot of possible crashes - [#389]: Improve truncation (crash and consistency) ### Added - [#379]: Add the ability to call `encode`/`encode_batch` with numpy arrays - [#292]: Support for the Unigram algorithm - [#378], [#394], [#416], [#417]: Many new Normalizer and PreTokenizer - [#403]: Add `TemplateProcessing` `PostProcessor`. - [#420]: Ability to fuse the "unk" token in BPE. ### Changed - [#360]: Lots of improvements related to words/alignment tracking - [#426]: Improvements on error messages thanks to PyO3 0.12 ## [0.8.1] ### Fixed - [#333]: Fix deserialization of `AddedToken`, where the content was not restored properly ### Changed - [#329]: Improved warning and behavior when we detect a fork - [#330]: BertNormalizer now keeps the same behavior than the original implementation when `strip_accents` is not specified. ## [0.8.0] ### Highlights of this release - We can now encode both pre-tokenized inputs, and raw strings. This is especially usefull when processing datasets that are already pre-tokenized like for NER (Name Entity Recognition), and helps while applying labels to each word. - Full tokenizer serialization. It is now easy to save a tokenizer to a single JSON file, to later load it back with just one line of code. That's what sharing a Tokenizer means now: 1 line of code. - With the serialization comes the compatibility with `Pickle`! The Tokenizer, all of its components, Encodings, everything can be pickled! - Training a tokenizer is now even faster (up to 5-10x) than before! - Compatibility with `multiprocessing`, even when using the `fork` start method. Since this library makes heavy use of the multithreading capacities of our computers to allows a very fast tokenization, this led to problems (deadlocks) when used with `multiprocessing`. This version now allows to disable the parallelism, and will warn you if this is necessary. - And a lot of other improvements, and fixes. ### Fixed - [#286]: Fix various crash when training a BPE model - [#309]: Fixed a few bugs related to additional vocabulary/tokens ### Added - [#272]: Serialization of the `Tokenizer` and all the parts (`PreTokenizer`, `Normalizer`, ...). This adds some methods to easily save/load an entire tokenizer (`from_str`, `from_file`). - [#273]: `Tokenizer` and its parts are now pickable - [#289]: Ability to pad to a multiple of a specified value. This is especially useful to ensure activation of the Tensor Cores, while ensuring padding to a multiple of 8. Use with `enable_padding(pad_to_multiple_of=8)` for example. - [#298]: Ability to get the currently set truncation/padding params - [#311]: Ability to enable/disable the parallelism using the `TOKENIZERS_PARALLELISM` environment variable. This is especially usefull when using `multiprocessing` capabilities, with the `fork` start method, which happens to be the default on Linux systems. Without disabling the parallelism, the process dead-locks while encoding. (Cf [#187] for more information) ### Changed - Improved errors generated during truncation: When the provided max length is too low are now handled properly. - [#249] `encode` and `encode_batch` now accept pre-tokenized inputs. When the input is pre-tokenized, the argument `is_pretokenized=True` must be specified. 
- [#276]: Improve BPE training speeds, by reading files sequentially, but parallelizing the processing of each file - [#280]: Use `onig` for byte-level pre-tokenization to remove all the differences with the original implementation from GPT-2 - [#309]: Improved the management of the additional vocabulary. This introduces an option `normalized`, controlling whether a token should be extracted from the normalized version of the input text. ## [0.7.0] ### Changed - Only one progress bar while reading files during training. This is better for use-cases with a high number of files as it avoids having too many progress bars on screen. Also avoids reading the size of each file before starting to actually read these files, as this process could take really long. - [#193]: `encode` and `encode_batch` now take a new optional argument, specifying whether we should add the special tokens. This is activated by default. - [#197]: `original_str` and `normalized_str` have been removed from the `Encoding` returned by `encode` and `encode_batch`. This brings a reduction of 70% of the memory footprint. - [#197]: The offsets provided on `Encoding` are now relative to the original string, and not the normalized one anymore. - The added token given to `add_special_tokens` or `add_tokens` on a `Tokenizer`, or while using `train(special_tokens=...)` can now be instances of `AddedToken` to provide more control over these tokens. - [#136]: Updated Pyo3 version - [#136]: Static methods `Model.from_files` and `Model.empty` are removed in favor of using constructors. - [#239]: `CharBPETokenizer` now corresponds to OpenAI GPT BPE implementation by default. ### Added - [#188]: `ByteLevel` is also a `PostProcessor` now and handles trimming the offsets if activated. This avoids the unintuitive inclusion of the whitespaces in the produced offsets, even if these whitespaces are part of the actual token. It has been added to `ByteLevelBPETokenizer` but it is off by default (`trim_offsets=False`). - [#236]: `RobertaProcessing` also handles trimming the offsets. - [#234]: New alignment mappings on the `Encoding`. Provide methods to easily convert between `char` or `word` (input space) and `token` (output space). - `post_process` can be called on the `Tokenizer` - [#208]: Ability to retrieve the vocabulary from the `Tokenizer` with `get_vocab(with_added_tokens: bool)` - [#136] Models can now be instantiated through object constructors. ### Fixed - [#193]: Fix some issues with the offsets being wrong with the `ByteLevel` BPE: - when `add_prefix_space=True` - [#156]: when a Unicode character gets split-up in multiple byte-level characters - Fix a bug where offsets were wrong when there was any added tokens in the sequence being encoded. - [#175]: Fix a bug that prevented the addition of more than a certain amount of tokens (even if not advised, but that's not the question). - [#205]: Trim the decoded string in `BPEDecoder` used by `CharBPETokenizer` ### How to migrate - Add the `ByteLevel` `PostProcessor` to your byte-level BPE tokenizers if relevant. If you are using `ByteLevelBPETokenizer`, this option is disabled by default (`trim_offsets=False`). - `BertWordPieceTokenizer` option to `add_special_tokens` must now be given to `encode` or `encode_batch` - Access to the `original_str` on the `Encoding` has been removed. The original string is the input of `encode` so it didn't make sense to keep it here. - No need to call `original_str.offsets(offsets[N])` to convert offsets to the original string. 
They are now relative to the original string by default. - Access to the `normalized_str` on the `Encoding` has been removed. Can be retrieved by calling `normalize(sequence)` on the `Tokenizer` - Change `Model.from_files` and `Model.empty` to use constructor. The model constructor should take the same arguments as the old methods. (ie `BPE(vocab, merges)` or `BPE()`) - If you were using the `CharBPETokenizer` and want to keep the same behavior as before, set `bert_normalizer=False` and `split_on_whitespace_only=True`. ## [0.6.0] ### Changed - [#165]: Big improvements in speed for BPE (Both training and tokenization) ### Fixed - [#160]: Some default tokens were missing from `BertWordPieceTokenizer` - [#156]: There was a bug in ByteLevel PreTokenizer that caused offsets to be wrong if a char got split up in multiple bytes. - [#174]: The `longest_first` truncation strategy had a bug ## [0.5.2] - [#163]: Do not open all files directly while training ### Fixed - We introduced a bug related to the saving of the WordPiece model in 0.5.1: The `vocab.txt` file was named `vocab.json`. This is now fixed. - The `WordLevel` model was also saving its vocabulary to the wrong format. ## [0.5.1] ### Changed - `name` argument is now optional when saving a `Model`'s vocabulary. When the name is not specified, the files get a more generic naming, like `vocab.json` or `merges.txt`. ## [0.5.0] ### Changed - [#145]: `BertWordPieceTokenizer` now cleans up some tokenization artifacts while decoding - [#149]: `ByteLevelBPETokenizer` now has `dropout`. - `do_lowercase` has been changed to `lowercase` for consistency between the different tokenizers. (Especially `ByteLevelBPETokenizer` and `CharBPETokenizer`) - [#139]: Expose `__len__` on `Encoding` - Improved padding performances. ### Added - Added a new `Strip` normalizer ### Fixed - [#145]: Decoding was buggy on `BertWordPieceTokenizer`. - [#152]: Some documentation and examples were still using the old `BPETokenizer` ### How to migrate - Use `lowercase` when initializing `ByteLevelBPETokenizer` or `CharBPETokenizer` instead of `do_lowercase`. ## [0.4.2] ### Fixed - [#137]: Fix a bug in the class `WordPieceTrainer` that prevented `BertWordPieceTokenizer` from being trained. ## [0.4.1] ### Fixed - [#134]: Fix a bug related to the punctuation in BertWordPieceTokenizer ## [0.4.0] ### Changed - [#131]: Replaced all .new() class methods by a proper __new__ implementation - Improved typings ### How to migrate - Remove all `.new` on all classe instanciations ## [0.3.0] ### Changed - BPETokenizer has been renamed to CharBPETokenizer for clarity. - Improve truncation/padding and the handling of overflowing tokens. Now when a sequence gets truncated, we provide a list of overflowing `Encoding` that are ready to be processed by a language model, just as the main `Encoding`. - Provide mapping to the original string offsets using: ``` output = tokenizer.encode(...) print(output.original_str.offsets(output.offsets[3])) ``` - [#99]: Exposed the vocabulary size on all tokenizers ### Added - Added `CharDelimiterSplit`: a new `PreTokenizer` that allows splitting sequences on the given delimiter (Works like `.split(delimiter)`) - Added `WordLevel`: a new model that simply maps `tokens` to their `ids`. ### Fixed - Fix a bug with IndexableString - Fix a bug with truncation ### How to migrate - Rename `BPETokenizer` to `CharBPETokenizer` - `Encoding.overflowing` is now a List instead of a `Optional[Encoding]` ## [0.2.1] ### Fixed - Fix a bug with the IDs associated with added tokens. 
- Fix a bug that was causing crashes in Python 3.5 [#1096]: https://github.com/huggingface/tokenizers/pull/1096 [#1072]: https://github.com/huggingface/tokenizers/pull/1072 [#956]: https://github.com/huggingface/tokenizers/pull/956 [#1008]: https://github.com/huggingface/tokenizers/pull/1008 [#1009]: https://github.com/huggingface/tokenizers/pull/1009 [#1047]: https://github.com/huggingface/tokenizers/pull/1047 [#1055]: https://github.com/huggingface/tokenizers/pull/1055 [#1051]: https://github.com/huggingface/tokenizers/pull/1051 [#1052]: https://github.com/huggingface/tokenizers/pull/1052 [#938]: https://github.com/huggingface/tokenizers/pull/938 [#939]: https://github.com/huggingface/tokenizers/pull/939 [#952]: https://github.com/huggingface/tokenizers/pull/952 [#954]: https://github.com/huggingface/tokenizers/pull/954 [#962]: https://github.com/huggingface/tokenizers/pull/962 [#961]: https://github.com/huggingface/tokenizers/pull/961 [#960]: https://github.com/huggingface/tokenizers/pull/960 [#919]: https://github.com/huggingface/tokenizers/pull/919 [#916]: https://github.com/huggingface/tokenizers/pull/916 [#895]: https://github.com/huggingface/tokenizers/pull/895 [#884]: https://github.com/huggingface/tokenizers/pull/884 [#882]: https://github.com/huggingface/tokenizers/pull/882 [#868]: https://github.com/huggingface/tokenizers/pull/868 [#860]: https://github.com/huggingface/tokenizers/pull/860 [#850]: https://github.com/huggingface/tokenizers/pull/850 [#844]: https://github.com/huggingface/tokenizers/pull/844 [#845]: https://github.com/huggingface/tokenizers/pull/845 [#851]: https://github.com/huggingface/tokenizers/pull/851 [#585]: https://github.com/huggingface/tokenizers/pull/585 [#793]: https://github.com/huggingface/tokenizers/pull/793 [#780]: https://github.com/huggingface/tokenizers/pull/780 [#770]: https://github.com/huggingface/tokenizers/pull/770 [#762]: https://github.com/huggingface/tokenizers/pull/762 [#718]: https://github.com/huggingface/tokenizers/pull/718 [#714]: https://github.com/huggingface/tokenizers/pull/714 [#707]: https://github.com/huggingface/tokenizers/pull/707 [#693]: https://github.com/huggingface/tokenizers/pull/693 [#686]: https://github.com/huggingface/tokenizers/pull/686 [#674]: https://github.com/huggingface/tokenizers/pull/674 [#657]: https://github.com/huggingface/tokenizers/pull/657 [#656]: https://github.com/huggingface/tokenizers/pull/656 [#652]: https://github.com/huggingface/tokenizers/pull/652 [#621]: https://github.com/huggingface/tokenizers/pull/621 [#620]: https://github.com/huggingface/tokenizers/pull/620 [#618]: https://github.com/huggingface/tokenizers/pull/618 [#617]: https://github.com/huggingface/tokenizers/pull/617 [#616]: https://github.com/huggingface/tokenizers/pull/616 [#590]: https://github.com/huggingface/tokenizers/pull/590 [#574]: https://github.com/huggingface/tokenizers/pull/574 [#544]: https://github.com/huggingface/tokenizers/pull/544 [#542]: https://github.com/huggingface/tokenizers/pull/542 [#539]: https://github.com/huggingface/tokenizers/pull/539 [#538]: https://github.com/huggingface/tokenizers/pull/538 [#533]: https://github.com/huggingface/tokenizers/pull/533 [#530]: https://github.com/huggingface/tokenizers/pull/530 [#519]: https://github.com/huggingface/tokenizers/pull/519 [#509]: https://github.com/huggingface/tokenizers/pull/509 [#508]: https://github.com/huggingface/tokenizers/pull/508 [#506]: https://github.com/huggingface/tokenizers/pull/506 [#500]: https://github.com/huggingface/tokenizers/pull/500 
[#498]: https://github.com/huggingface/tokenizers/pull/498 [#492]: https://github.com/huggingface/tokenizers/pull/492 [#481]: https://github.com/huggingface/tokenizers/pull/481 [#480]: https://github.com/huggingface/tokenizers/pull/480 [#477]: https://github.com/huggingface/tokenizers/pull/477 [#476]: https://github.com/huggingface/tokenizers/pull/476 [#470]: https://github.com/huggingface/tokenizers/pull/470 [#464]: https://github.com/huggingface/tokenizers/pull/464 [#459]: https://github.com/huggingface/tokenizers/pull/459 [#420]: https://github.com/huggingface/tokenizers/pull/420 [#417]: https://github.com/huggingface/tokenizers/pull/417 [#416]: https://github.com/huggingface/tokenizers/pull/416 [#403]: https://github.com/huggingface/tokenizers/pull/403 [#394]: https://github.com/huggingface/tokenizers/pull/394 [#389]: https://github.com/huggingface/tokenizers/pull/389 [#379]: https://github.com/huggingface/tokenizers/pull/379 [#378]: https://github.com/huggingface/tokenizers/pull/378 [#363]: https://github.com/huggingface/tokenizers/pull/363 [#362]: https://github.com/huggingface/tokenizers/pull/362 [#360]: https://github.com/huggingface/tokenizers/pull/360 [#355]: https://github.com/huggingface/tokenizers/pull/355 [#333]: https://github.com/huggingface/tokenizers/pull/333 [#330]: https://github.com/huggingface/tokenizers/pull/330 [#329]: https://github.com/huggingface/tokenizers/pull/329 [#311]: https://github.com/huggingface/tokenizers/pull/311 [#309]: https://github.com/huggingface/tokenizers/pull/309 [#292]: https://github.com/huggingface/tokenizers/pull/292 [#289]: https://github.com/huggingface/tokenizers/pull/289 [#286]: https://github.com/huggingface/tokenizers/pull/286 [#280]: https://github.com/huggingface/tokenizers/pull/280 [#276]: https://github.com/huggingface/tokenizers/pull/276 [#273]: https://github.com/huggingface/tokenizers/pull/273 [#272]: https://github.com/huggingface/tokenizers/pull/272 [#249]: https://github.com/huggingface/tokenizers/pull/249 [#239]: https://github.com/huggingface/tokenizers/pull/239 [#236]: https://github.com/huggingface/tokenizers/pull/236 [#234]: https://github.com/huggingface/tokenizers/pull/234 [#208]: https://github.com/huggingface/tokenizers/pull/208 [#205]: https://github.com/huggingface/tokenizers/issues/205 [#197]: https://github.com/huggingface/tokenizers/pull/197 [#193]: https://github.com/huggingface/tokenizers/pull/193 [#190]: https://github.com/huggingface/tokenizers/pull/190 [#188]: https://github.com/huggingface/tokenizers/pull/188 [#187]: https://github.com/huggingface/tokenizers/issues/187 [#175]: https://github.com/huggingface/tokenizers/issues/175 [#174]: https://github.com/huggingface/tokenizers/issues/174 [#165]: https://github.com/huggingface/tokenizers/pull/165 [#163]: https://github.com/huggingface/tokenizers/issues/163 [#160]: https://github.com/huggingface/tokenizers/issues/160 [#156]: https://github.com/huggingface/tokenizers/pull/156 [#152]: https://github.com/huggingface/tokenizers/issues/152 [#149]: https://github.com/huggingface/tokenizers/issues/149 [#145]: https://github.com/huggingface/tokenizers/issues/145 [#139]: https://github.com/huggingface/tokenizers/issues/139 [#137]: https://github.com/huggingface/tokenizers/issues/137 [#134]: https://github.com/huggingface/tokenizers/issues/134 [#131]: https://github.com/huggingface/tokenizers/issues/131 [#99]: https://github.com/huggingface/tokenizers/pull/99
huggingface/tokenizers/blob/main/bindings/python/CHANGELOG.md
Introduction [[introduction]] <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/thumbnail.png" alt="Unit 8"/> In Unit 6, we learned about Advantage Actor Critic (A2C), a hybrid architecture combining value-based and policy-based methods that helps to stabilize the training by reducing the variance with: - *An Actor* that controls **how our agent behaves** (policy-based method). - *A Critic* that measures **how good the action taken is** (value-based method). Today we'll learn about Proximal Policy Optimization (PPO), an architecture that **improves our agent's training stability by avoiding policy updates that are too large**. To do that, we use a ratio that indicates the difference between our current and old policy and clip this ratio to a specific range \\( [1 - \epsilon, 1 + \epsilon] \\) . Doing this will ensure **that our policy update will not be too large and that the training is more stable.** This Unit is in two parts: - In this first part, you'll learn the theory behind PPO and code your PPO agent from scratch using the [CleanRL](https://github.com/vwxyzjn/cleanrl) implementation. To test its robustness you'll use LunarLander-v2. LunarLander-v2 **is the first environment you used when you started this course**. At that time, you didn't know how PPO worked, and now, **you can code it from scratch and train it. How incredible is that 🤩**. - In the second part, we'll get deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/) and train an agent playing vizdoom (an open source version of Doom). <figure> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/environments.png" alt="Environment"/> <figcaption>These are the environments you're going to use to train your agents: VizDoom environments</figcaption> </figure> Sound exciting? Let's get started! 🚀
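As a preview of what you will implement in the hands-on part, the clipping idea described above boils down to a few lines. The snippet below is only a sketch of the clipped surrogate objective (it is not the CleanRL implementation, and the tensor names are placeholders):

```python
import torch

def ppo_clip_loss(new_log_prob, old_log_prob, advantage, epsilon=0.2):
    # Ratio between the current policy and the old policy for the taken actions
    ratio = torch.exp(new_log_prob - old_log_prob)

    # Unclipped and clipped surrogate objectives
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage

    # PPO maximizes the minimum of the two, so we minimize the negated value
    return -torch.min(unclipped, clipped).mean()
```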
huggingface/deep-rl-class/blob/main/units/en/unit8/introduction.mdx
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Text-to-image The Stable Diffusion model was created by researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [Runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionPipeline`] is capable of generating photorealistic images given any text input. It's trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: *By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.* <Tip> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations! 
</Tip> ## StableDiffusionPipeline [[autodoc]] StableDiffusionPipeline - all - __call__ - enable_attention_slicing - disable_attention_slicing - enable_vae_slicing - disable_vae_slicing - enable_xformers_memory_efficient_attention - disable_xformers_memory_efficient_attention - enable_vae_tiling - disable_vae_tiling - load_textual_inversion - from_single_file - load_lora_weights - save_lora_weights ## StableDiffusionPipelineOutput [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput ## FlaxStableDiffusionPipeline [[autodoc]] FlaxStableDiffusionPipeline - all - __call__ ## FlaxStableDiffusionPipelineOutput [[autodoc]] pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput
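For reference, a minimal text-to-image sketch with [`StableDiffusionPipeline`] is shown below; it assumes the `runwayml/stable-diffusion-v1-5` checkpoint and a CUDA GPU, but any compatible Stable Diffusion checkpoint works the same way:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```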
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/text2img.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MBart and MBart-50 <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=mbart"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-mbart-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/mbart-large-50-one-to-many-mmt"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview of MBart The MBart model was presented in [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text. This model was contributed by [valhalla](https://huggingface.co/valhalla). The authors' code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart). ### Training of MBart MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks. As the model is multilingual, it expects the sequences in a different format. A special language id token is added in both the source and target text. The source text format is `X [eos, src_lang_code]` where `X` is the source text. The target text format is `[tgt_lang_code] X [eos]`. `bos` is never used. The regular [`~MBartTokenizer.__call__`] will encode the source text format passed as the first argument or with the `text` keyword, and the target text format passed with the `text_target` keyword argument. - Supervised training ```python >>> from transformers import MBartForConditionalGeneration, MBartTokenizer >>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO") >>> example_english_phrase = "UN Chief Says There Is No Military Solution in Syria" >>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" >>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt") >>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro") >>> # forward pass >>> model(**inputs) ``` - Generation While generating the target text set the `decoder_start_token_id` to the target language id. The following example shows how to translate English to Romanian using the *facebook/mbart-large-en-ro* model.
```python >>> from transformers import MBartForConditionalGeneration, MBartTokenizer >>> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX") >>> article = "UN Chief Says There Is No Military Solution in Syria" >>> inputs = tokenizer(article, return_tensors="pt") >>> model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro") >>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"]) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "Şeful ONU declară că nu există o soluţie militară în Siria" ``` ## Overview of MBart-50 MBart-50 was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. MBart-50 is created using the original *mbart-large-cc25* checkpoint by extending its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretraining it on 50 languages. According to the abstract: *Multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one direction, a pretrained model is finetuned on many directions at the same time. It demonstrates that pretrained models can be extended to incorporate additional languages without loss of performance. Multilingual finetuning improves on average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while improving 9.3 BLEU on average over bilingual baselines from scratch.* ### Training of MBart-50 The text format for MBart-50 is slightly different from mBART. For MBart-50, the language id token is used as a prefix for both source and target text, i.e. the text format is `[lang_code] X [eos]`, where `lang_code` is the source language id for the source text and the target language id for the target text, with `X` being the source or target text respectively. MBart-50 has its own tokenizer [`MBart50Tokenizer`]. - Supervised training ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO") src_text = " UN Chief Says There Is No Military Solution in Syria" tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria" model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt") model(**model_inputs) # forward pass ``` - Generation To generate using the mBART-50 multilingual translation models, `eos_token_id` is used as the `decoder_start_token_id` and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the *forced_bos_token_id* parameter to the *generate* method. The following example shows how to translate from Hindi to French and from Arabic to English using the *facebook/mbart-large-50-many-to-many-mmt* checkpoint. ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है" article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا." 
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") # translate Hindi to French tokenizer.src_lang = "hi_IN" encoded_hi = tokenizer(article_hi, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria." # translate Arabic to English tokenizer.src_lang = "ar_AR" encoded_ar = tokenizer(article_ar, return_tensors="pt") generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "The Secretary-General of the United Nations says there is no military solution in Syria." ``` ## Documentation resources - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## MBartConfig [[autodoc]] MBartConfig ## MBartTokenizer [[autodoc]] MBartTokenizer - build_inputs_with_special_tokens ## MBartTokenizerFast [[autodoc]] MBartTokenizerFast ## MBart50Tokenizer [[autodoc]] MBart50Tokenizer ## MBart50TokenizerFast [[autodoc]] MBart50TokenizerFast <frameworkcontent> <pt> ## MBartModel [[autodoc]] MBartModel ## MBartForConditionalGeneration [[autodoc]] MBartForConditionalGeneration ## MBartForQuestionAnswering [[autodoc]] MBartForQuestionAnswering ## MBartForSequenceClassification [[autodoc]] MBartForSequenceClassification ## MBartForCausalLM [[autodoc]] MBartForCausalLM - forward </pt> <tf> ## TFMBartModel [[autodoc]] TFMBartModel - call ## TFMBartForConditionalGeneration [[autodoc]] TFMBartForConditionalGeneration - call </tf> <jax> ## FlaxMBartModel [[autodoc]] FlaxMBartModel - __call__ - encode - decode ## FlaxMBartForConditionalGeneration [[autodoc]] FlaxMBartForConditionalGeneration - __call__ - encode - decode ## FlaxMBartForSequenceClassification [[autodoc]] FlaxMBartForSequenceClassification - __call__ - encode - decode ## FlaxMBartForQuestionAnswering [[autodoc]] FlaxMBartForQuestionAnswering - __call__ - encode - decode </jax> </frameworkcontent>
huggingface/transformers/blob/main/docs/source/en/model_doc/mbart.md
# Quiz The best way to learn and [to avoid the illusion of competence](https://www.coursera.org/lecture/learning-how-to-learn/illusions-of-competence-BuFzf) **is to test yourself.** This will help you to find **where you need to reinforce your knowledge**. ### Q1: Which of the following tools are specifically designed for video game development? <Question choices={[ { text: "Unity (C#)", explain: "", correct: true, }, { text: "Unreal Engine (C++)", explain: "", correct: true, }, { text: "Godot (GDScript, C++, C#)", explain: "", correct: true, }, { text: "JetBrains' Rider", explain: "Although useful for its support of C# for Unity, it's not a video game development IDE", correct: false, }, { text: "JetBrains' CLion", explain: "Although useful for its support of C++ for Unreal Engine, it's not a video game development IDE", correct: false, }, { text: "Microsoft Visual Studio and Visual Studio Code", explain: "Although they include support for both Unity and Unreal, they are generic IDEs, not video game oriented ones.", correct: false, }, ]} /> ### Q2: Which of the following statements are true about Unity ML-Agents? <Question choices={[ { text: "Unity `Scene` objects can be used to create learning environments", explain: "", correct: true, }, { text: "Unity ML-Agents allows you to create and train your agents using Reinforcement Learning", explain: "", correct: true, }, { text: "Its `Communicator` component manages the communication between Unity's C# Environments/Agents and a Python back-end", explain: "", correct: true, }, { text: "The training process uses Reinforcement Learning algorithms, implemented in PyTorch", explain: "", correct: true, }, { text: "Unity ML-Agents only supports Proximal Policy Optimization (PPO)", explain: "No, Unity ML-Agents supports several families of algorithms, including Actor-Critic, which is going to be explained in the next section", correct: false, }, { text: "It includes a Gym Wrapper and a multi-agent version of it called `PettingZoo`", explain: "", correct: true, }, ]} /> ### Q3: Fill in the missing letters - In Unity ML-Agents, the Policy of an Agent is called a b _ _ _ n - The component in charge of orchestrating the agents is called the _ c _ _ _ m _ <details> <summary>Solution</summary> - b r a i n - a c a d e m y </details> ### Q4: Define in your own words what a `raycast` is <details> <summary>Solution</summary> A raycast is (most of the time) a linear projection, like a `laser`, which aims to detect collisions with objects. </details> ### Q5: What are the differences between capturing the environment using `frames` or `raycasts`? <Question choices={[ { text: "By using `frames`, the environment is defined by each of the pixels of the screen. By using `raycasts`, we only send a sample of those pixels.", explain: "`Raycasts` don't have anything to do with pixels. They are linear projections (lasers) that we spawn to look for collisions.", correct: false, }, { text: "By using `raycasts`, the environment is defined by each of the pixels of the screen. By using `frames`, we spawn (usually) a line to check what objects it collides with", explain: "It's the other way around - `frames` collect pixels, `raycasts` check for collisions.", correct: false, }, { text: "By using `frames`, we collect all the pixels of the screen, which define the environment. 
By using `raycasts`, we don't use pixels; we spawn (normally) lines and check their collisions", explain: "", correct: true, }, ]} /> ### Q6: Name several environment and agent input variables used to train the agent in the Snowball or Pyramid environments <details> <summary>Solution</summary> - Collisions of the raycasts spawned from the agent detecting blocks, (invisible) walls, stones, our target, switches, etc. - Traditional inputs describing agent features, such as its speed - Boolean variables, such as the switch (on/off) in Pyramids or the `can I shoot?` in the SnowballTarget. </details> Congrats on finishing this quiz 🥳! If you missed some elements, take time to read the chapter again to reinforce (😏) your knowledge.
huggingface/deep-rl-class/blob/main/units/en/unit5/quiz.mdx
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Using Sample Factory for high throughput training Under construction 🚧.
huggingface/simulate/blob/main/docs/source/howto/sample_factory.mdx
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Overview This section contains an exhaustive and technical description of `huggingface_hub` classes and methods.
huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/overview.md
# Building a FastAPI App with the Gradio Python Client Tags: CLIENT, API, WEB APP In this blog post, we will demonstrate how to use the `gradio_client` [Python library](getting-started-with-the-python-client/) to programmatically make requests to a Gradio app, by building an example FastAPI web app. The web app we will build is called "Acappellify": it lets users upload a video file as input and returns a version of that video without instrumental music. It also displays a gallery of the generated videos. **Prerequisites** Before you begin, make sure you are running Python 3.9 or later and have the following libraries installed: - `gradio_client` - `fastapi` - `uvicorn` You can install these libraries with `pip`: ```bash $ pip install gradio_client fastapi uvicorn ``` You will also need ffmpeg installed. You can check whether you already have it by running the following in your terminal: ```bash $ ffmpeg version ``` Otherwise, install ffmpeg by following these [instructions](https://www.hostinger.com/tutorials/how-to-install-ffmpeg). ## Step 1: Write the video processing function Let's start with what seems like the most complex part -- using machine learning to remove the music from a video. Luckily, there is an existing Space that makes this easy: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all the other sounds from the original clip. Perfect for our client to use! Open a new Python file, say `main.py`, import the `Client` class from `gradio_client`, and connect it to that Space: ```py from gradio_client import Client client = Client("abidlabs/music-separation") def acapellify(audio_path): result = client.predict(audio_path, api_name="/predict") return result[0] ``` That's all the code that's needed -- notice that the API endpoint returns a list containing two audio files (one without music, one with only music), so we just return the first element of the list. --- **Note**: since this is a public Space, other users may be using it at the same time, which can make it slow. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) to create a private Space that only you have access to and that bypasses the queue. To do that, simply replace the first two lines above with the code below: ```py from gradio_client import Client client = Client.duplicate("abidlabs/music-separation", hf_token=YOUR_HF_TOKEN) ``` The rest of the code stays the same! --- Now, of course, we are working with video files, so we first need to extract the audio from the video file. For this, we will use the `ffmpeg` library, which does a lot of the heavy lifting when it comes to handling audio and video files. The most common way to use `ffmpeg` is via the command line, which we will call through Python's `subprocess` module: Our video processing workflow consists of three steps: 1. First, we start from the video file path and extract the audio with `ffmpeg`. 2. Then, we pass the audio file through the `acapellify()` function above. 3. Finally, we merge the new audio with the original video to produce the final Acappellify video. Here is the full Python code, which you can add to your `main.py` file: ```python import subprocess def process_video(video_path): old_audio = os.path.basename(video_path).split(".")[0] + ".m4a" subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio]) new_audio = acapellify(old_audio) new_video = f"acap_{video_path}" subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f"static/{new_video}"]) return new_video ``` If you would like to understand all of the command-line parameters in detail, read the [ffmpeg documentation](https://ffmpeg.org/ffmpeg.html), as they are beyond the scope of this tutorial. ## Step 2: Create a FastAPI app (backend routes) Next, we will create a simple FastAPI app. If you have not used FastAPI before, check out the [excellent FastAPI documentation](https://fastapi.tiangolo.com/). Otherwise, the basic template below will look familiar; we add it to `main.py`: ```python import os from fastapi import FastAPI, File, UploadFile, Request from fastapi.responses import HTMLResponse, RedirectResponse from fastapi.staticfiles import StaticFiles from fastapi.templating import Jinja2Templates app = FastAPI() os.makedirs("static", exist_ok=True) app.mount("/static", StaticFiles(directory="static"), name="static") templates = Jinja2Templates(directory="templates") videos = [] @app.get("/", response_class=HTMLResponse) async def home(request: Request): return templates.TemplateResponse( "home.html", {"request": request, "videos": videos}) @app.post("/uploadvideo/") async def upload_video(video: UploadFile = File(...)): new_video = process_video(video.filename) videos.append(new_video) return RedirectResponse(url='/', status_code=303) ``` In this example, the FastAPI app has two routes: `/` and `/uploadvideo/`. The `/` route returns an HTML template that displays a gallery of all the uploaded videos. The `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object representing the uploaded video file. The video file is "acapellified" via the `process_video()` method, and the output video is stored in a list that keeps all the uploaded videos in memory. Note that this is only a very basic example; if this were a production app, you would need to add more logic to handle things like file storage, user authentication, and security considerations (one possible way to persist the upload to disk before processing it is sketched below). 
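The route above hands `video.filename` straight to `process_video()`, which assumes the uploaded file already exists on disk under that name. That is not guaranteed by FastAPI, so here is a minimal, hypothetical sketch (not part of the original guide) of how one might first save the upload before processing it; it reuses the `app`, `videos`, and `process_video` objects defined above, and the local path is an illustrative assumption:

```python
import shutil

@app.post("/uploadvideo/")
async def upload_video(video: UploadFile = File(...)):
    # Persist the uploaded file to disk first: UploadFile only exposes a
    # temporary file-like object, not a file saved under video.filename.
    local_path = video.filename  # illustrative: save next to main.py
    with open(local_path, "wb") as f:
        shutil.copyfileobj(video.file, f)

    new_video = process_video(local_path)  # reuse the function from Step 1
    videos.append(new_video)
    return RedirectResponse(url="/", status_code=303)
```

A handler like this would simply replace the `/uploadvideo/` route shown earlier; everything else in the guide stays the same.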
## Step 3: Create a FastAPI app (frontend template) Finally, we create the frontend of the web app. First, create a folder called `templates` in the same directory as `main.py`. Then, create a template called `home.html` inside the `templates` folder. Here is the resulting file structure: ```csv ├── main.py ├── templates │ └── home.html ``` Write the following into the `home.html` file: ```html &lt;!DOCTYPE html> &lt;html> &lt;head> &lt;title> Video Gallery &lt;/title> &lt;style> body { font-family: sans-serif; margin: 0; padding: 0; background-color: #f5f5f5; } h1 { text-align: center; margin-top: 30px; margin-bottom: 20px; } .gallery { display: flex; flex-wrap: wrap; justify-content: center; gap: 20px; padding: 20px; } .video { border: 2px solid #ccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow: hidden; width: 300px; margin-bottom: 20px; } .video video { width: 100%; height: 200px; } .video p { text-align: center; margin: 10px 0; } form { margin-top: 20px; text-align: center; } input[type="file"] { display: none; } .upload-btn { display: inline-block; background-color: #3498db; color: #fff; padding: 10px 20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; } .upload-btn:hover { background-color: #2980b9; } .file-name { margin-left: 10px; } &lt;/style> &lt;/head> &lt;body> &lt;h1> Video Gallery &lt;/h1> {% if videos %} &lt;div class="gallery"> {% for video in videos %} &lt;div class="video"> &lt;video controls> &lt;source src="{{ url_for('static', path=video) }}" type="video/mp4"> Your browser does not support the video tag. &lt;/video> &lt;p>{{ video }}&lt;/p> &lt;/div> {% endfor %} &lt;/div> {% else %} &lt;p> No videos have been uploaded yet. &lt;/p> {% endif %} &lt;form action="/uploadvideo/" method="post" enctype="multipart/form-data"> &lt;label for="video-upload" class="upload-btn"> Choose video file &lt;/label> &lt;input type="file" name="video" id="video-upload"> &lt;span class="file-name">&lt;/span> &lt;button type="submit" class="upload-btn"> Upload &lt;/button> &lt;/form> &lt;script> // Display the selected file name in the form const fileUpload = document.getElementById("video-upload"); const fileName = document.querySelector(".file-name"); fileUpload.addEventListener("change", (e) => { fileName.textContent = e.target.files[0].name; }); &lt;/script> &lt;/body> &lt;/html> ``` ## Step 4: Run the FastAPI app Finally, we are ready to run the FastAPI app, powered by the Gradio Python client. Open a terminal, navigate to the directory containing the `main.py` file, and run the following command: ```bash $ uvicorn main:app ``` You should see output like this: ```csv Loaded as API: https://abidlabs-music-separation.hf.space ✔ INFO: Started server process [1360] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) ``` That's it! Start uploading videos and you will get "acapellified" videos back in the response (processing can take anywhere from a few seconds to a few minutes depending on the length of your videos). Here is what the UI looks like after uploading two videos: ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png) If you want to learn more about how to use the Gradio Python client in your projects, [read the dedicated guide](/getting-started-with-the-python-client/).
gradio-app/gradio/blob/main/guides/cn/06_client-libraries/fastapi-app-with-the-gradio-client.md
In our other videos we talked about the basics of fine-tuning a language model with Tensorflow (and as always, when I refer to videos I'll link them below). Still, can we do better? So here's the code from our model fine-tuning video, and while it works, we could definitely tweak a couple of things. By far the most important thing is the learning rate. In this video we'll talk about how to change it, which will make your training much more consistently successful. In fact, there are two things we want to change about the default learning rate for Adam. The first is that it's way too high for our models - by default Adam uses a learning rate of 1e-3, which is very high for training Transformers. We're going to start at 5e-5, which is 20 times lower than the default. And secondly, we don't just want a constant learning rate - we can get even better performance if we 'decay' the learning rate down to a tiny value, or even 0, over the course of training. That's what this PolynomialDecay schedule thing is doing. That name might be intimidating, especially if you only vaguely remember what a polynomial is from maths class. However, all we need to do is tell it how long training is going to be, so it decays at the right speed - that's what this code here is doing. We're computing how many minibatches the model is going to see over its entire training run, which is the size of the training set, divided by the batch_size to get the number of batches per epoch, and then multiplied by the number of epochs to get the total number of batches across the whole training run. Once we know how many training steps we're taking, we just pass all that information to the scheduler and we're ready to go. What does the polynomial decay schedule look like? With default options, it's actually just a linear schedule, so it looks like this - it starts at 5e-5, which means 5 times ten to the minus 5, and then decays down at a constant rate until it hits zero right at the very end of training. So why do they call it polynomial and not linear? Because if you tweak the options, you can get a higher-order decay schedule, but there's no need to do that right now. Now, how do we use our learning rate schedule? Easy, we just pass it to Adam! You'll notice the first time we compiled the model, we just passed it the string "adam". Keras recognizes the names of common optimizers and loss functions if you pass them as strings, so it saves time to do that if you only want the default settings. But we're professional machine learners now, with our very own learning rate schedule, so we have to do things properly. So first we import the optimizer, then we initialize it with our scheduler, and then we compile the model using the new optimizer, and whatever loss function you want - this will be sparse categorical crossentropy if you're following along from the fine-tuning video. And now we have a high-performance model, ready to go. All that remains is to fit the model just like we did before! Remember, because we compiled the model with the new optimizer and the new learning rate schedule, we don't need to change anything here. We just call fit again, with exactly the same command as before, but now we get beautiful training with a nice, smooth learning rate decay.
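For reference, the approach described above looks roughly like the sketch below. It is a minimal illustration rather than the exact notebook from the course: the `model` and `tf_train_dataset` objects are assumed to come from the earlier fine-tuning video, and the number of epochs is an arbitrary example value.

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import PolynomialDecay
from tensorflow.keras.losses import SparseCategoricalCrossentropy

num_epochs = 3
# Total optimizer steps: batches per epoch (len of the batched tf.data.Dataset)
# multiplied by the number of epochs.
num_train_steps = len(tf_train_dataset) * num_epochs

# Linear decay from 5e-5 down to 0 over the whole training run.
lr_schedule = PolynomialDecay(
    initial_learning_rate=5e-5,
    end_learning_rate=0.0,
    decay_steps=num_train_steps,
)

# Pass the schedule to Adam instead of compiling with the plain "adam" string.
model.compile(
    optimizer=Adam(learning_rate=lr_schedule),
    loss=SparseCategoricalCrossentropy(from_logits=True),
)

# Fitting works exactly as before; only the optimizer changed.
model.fit(tf_train_dataset, epochs=num_epochs)
```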
huggingface/course/blob/main/subtitles/en/raw/chapter3/03d_keras-learning-rate.md
Gradio Demo: theme_extended_step_3 ``` !pip install -q gradio ``` ``` import gradio as gr import time with gr.Blocks( theme=gr.themes.Default( font=[gr.themes.GoogleFont("Inconsolata"), "Arial", "sans-serif"] ) ) as demo: textbox = gr.Textbox(label="Name") slider = gr.Slider(label="Count", minimum=0, maximum=100, step=1) with gr.Row(): button = gr.Button("Submit", variant="primary") clear = gr.Button("Clear") output = gr.Textbox(label="Output") def repeat(name, count): time.sleep(3) return name * count button.click(repeat, [textbox, slider], output) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/theme_extended_step_3/run.ipynb
-- title: "Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator" thumbnail: /blog/assets/habana-gaudi-2-bloom/thumbnail.png authors: - user: regisss --- # Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator This article will show you how to easily deploy large language models with hundreds of billions of parameters like BLOOM on [Habana® Gaudi®2](https://habana.ai/training/gaudi2/) using 🤗 [Optimum Habana](https://huggingface.co/docs/optimum/habana/index), which is the bridge between Gaudi2 and the 🤗 Transformers library. As demonstrated in the benchmark presented in this post, this will enable you to **run inference faster than with any GPU currently available on the market**. As models get bigger and bigger, deploying them into production to run inference has become increasingly challenging. Both hardware and software have seen a lot of innovations to address these challenges, so let's dive in to see how to efficiently overcome them! ## BLOOMZ [BLOOM](https://arxiv.org/abs/2211.05100) is a 176-billion-parameter autoregressive model that was trained to complete sequences of text. It can handle 46 different languages and 13 programming languages. Designed and trained as part of the [BigScience](https://bigscience.huggingface.co/) initiative, BLOOM is an open-science project that involved a large number of researchers and engineers all over the world. More recently, another model with the exact same architecture was released: [BLOOMZ](https://arxiv.org/abs/2211.01786), which is a fine-tuned version of BLOOM on several tasks leading to better generalization and zero-shot[^1] capabilities. Such large models raise new challenges in terms of memory and speed for both [training](https://huggingface.co/blog/bloom-megatron-deepspeed) and [inference](https://huggingface.co/blog/bloom-inference-optimization). Even in 16-bit precision, one instance requires 352 GB to fit! You will probably struggle to find any device with so much memory at the moment, but state-of-the-art hardware like Habana Gaudi2 does make it possible to perform inference on BLOOM and BLOOMZ models with low latencies. ## Habana Gaudi2 [Gaudi2](https://habana.ai/training/gaudi2/) is the second-generation AI hardware accelerator designed by Habana Labs. A single server contains 8 accelerator devices (called Habana Processing Units, or HPUs) with 96GB of memory each, which provides room to make very large models fit in. However, hosting the model is not very interesting if the computation is slow. Fortunately, Gaudi2 shines on that aspect: it differs from GPUs in that its architecture enables the accelerator to perform General Matrix Multiplication (GeMM) and other operations in parallel, which speeds up deep learning workflows. These features make Gaudi2 a great candidate for LLM training and inference. Habana's SDK, SynapseAI™, supports PyTorch and DeepSpeed for accelerating LLM training and inference. The [SynapseAI graph compiler](https://docs.habana.ai/en/latest/Gaudi_Overview/SynapseAI_Software_Suite.html#graph-compiler-and-runtime) will optimize the execution of the operations accumulated in the graph (e.g. operator fusion, data layout management, parallelization, pipelining and memory management, and graph-level optimizations). 
Moreover, support for [HPU graphs](https://docs.habana.ai/en/latest/PyTorch/Inference_on_PyTorch/Inference_Using_HPU_Graphs.html) and [DeepSpeed-inference](https://docs.habana.ai/en/latest/PyTorch/DeepSpeed/Inference_Using_DeepSpeed.html) have just recently been introduced in SynapseAI, and these are well-suited for latency-sensitive applications as shown in our benchmark below. All these features are integrated into the 🤗 [Optimum Habana](https://github.com/huggingface/optimum-habana) library so that deploying your model on Gaudi is very simple. Check out the quick-start page [here](https://huggingface.co/docs/optimum/habana/quickstart). If you would like to get access to Gaudi2, go to the [Intel Developer Cloud](https://www.intel.com/content/www/us/en/secure/developer/devcloud/cloud-launchpad.html) and follow [this guide](https://huggingface.co/blog/habana-gaudi-2-benchmark#how-to-get-access-to-gaudi2). ## Benchmarks In this section, we are going to provide an early benchmark of BLOOMZ on Gaudi2, first-generation Gaudi and Nvidia A100 80GB. Although these devices have quite a lot of memory, the model is so large that a single device is not enough to contain a single instance of BLOOMZ. To solve this issue, we are going to use [DeepSpeed](https://www.deepspeed.ai/), which is a deep learning optimization library that enables many memory and speed improvements to accelerate the model and make it fit the device. In particular, we rely here on [DeepSpeed-inference](https://arxiv.org/abs/2207.00032): it introduces several features such as [model (or pipeline) parallelism](https://huggingface.co/blog/bloom-megatron-deepspeed#pipeline-parallelism) to make the most of the available devices. For Gaudi2, we use [Habana's DeepSpeed fork](https://github.com/HabanaAI/deepspeed) that adds support for HPUs. ### Latency We measured latencies (batch of one sample) for two different sizes of BLOOMZ, both with multi-billion parameters: - [176 billion](https://huggingface.co/bigscience/bloomz) parameters - [7 billion](https://huggingface.co/bigscience/bloomz-7b1) parameters Runs were performed with DeepSpeed-inference in 16-bit precision with 8 devices and using a [key-value cache](https://huggingface.co/docs/transformers/v4.27.1/en/model_doc/bloom#transformers.BloomForCausalLM.forward.use_cache). Note that while [CUDA graphs](https://developer.nvidia.com/blog/cuda-graphs/) are not currently compatible with model parallelism in DeepSpeed (DeepSpeed v0.8.2, see [here](https://github.com/microsoft/DeepSpeed/blob/v0.8.2/deepspeed/inference/engine.py#L158)), HPU graphs are supported in Habana's DeepSpeed fork. All benchmarks are doing [greedy generation](https://huggingface.co/blog/how-to-generate#greedy-search) of 100 token outputs. The input prompt is: > "DeepSpeed is a machine learning framework" which consists of 7 tokens with BLOOM's tokenizer. The results for inference latency are displayed in the table below (the unit is *seconds*). 
| Model | Number of devices | Gaudi2 latency (seconds) | A100-80GB latency (seconds) | First-gen Gaudi latency (seconds) | |:-----------:|:-----------------:|:-------------------------:|:-----------------:|:----------------------------------:| | BLOOMZ | 8 | 3.103 | 4.402 | / | | BLOOMZ-7B | 8 | 0.734 | 2.417 | 3.321 | | BLOOMZ-7B | 1 | 0.772 | 2.119 | 2.387 | *Update: the numbers above were updated with the releases of Optimum Habana 1.6 and SynapseAI 1.10, leading to a* x*1.42 speedup on BLOOMZ with Gaudi2 compared to A100.* The Habana team recently introduced support for DeepSpeed-inference in SynapseAI 1.8, and thereby quickly enabled inference for 100+ billion parameter models. **For the 176-billion-parameter checkpoint, Gaudi2 is 1.42x faster than A100 80GB**. Smaller checkpoints present interesting results too. **Gaudi2 is 2.89x faster than A100 for BLOOMZ-7B!** It is also interesting to note that it manages to benefit from model parallelism whereas A100 is faster on a single device. We also ran these models on first-gen Gaudi. While it is slower than Gaudi2, it is interesting from a price perspective as a DL1 instance on AWS costs approximately 13\$ per hour. Latency for BLOOMZ-7B on first-gen Gaudi is 2.387 seconds. Thus, **first-gen Gaudi offers for the 7-billion checkpoint a better price-performance ratio than A100** which costs more than 30\$ per hour! We expect the Habana team will optimize the performance of these models in the upcoming SynapseAI releases. For example, in our last benchmark, we saw that [Gaudi2 performs Stable Diffusion inference 2.2x faster than A100](https://huggingface.co/blog/habana-gaudi-2-benchmark#generating-images-from-text-with-stable-diffusion) and this has since been improved further to 2.37x with the latest optimizations provided by Habana. We will update these numbers as new versions of SynapseAI are released and integrated within Optimum Habana. ### Running inference on a complete dataset The script we wrote enables using your model to complete sentences over a whole dataset. This is useful to try BLOOMZ inference on Gaudi2 on your own data. Here is an example with the [*tldr_news*](https://huggingface.co/datasets/JulesBelveze/tldr_news/viewer/all/test) dataset. It contains both the headline and content of several articles (you can visualize it on the Hugging Face Hub). We kept only the *content* column and truncated each sample to the first 16 tokens so that the model generates the rest of the sequence with 50 new tokens. The first five samples look like: ``` Batch n°1 Input: ['Facebook has released a report that shows what content was most widely viewed by Americans between'] Output: ['Facebook has released a report that shows what content was most widely viewed by Americans between January and June of this year. The report, which is based on data from the company’s mobile advertising platform, shows that the most popular content on Facebook was news, followed by sports, entertainment, and politics. The report also shows that the most'] -------------------------------------------------------------------------------------------------- Batch n°2 Input: ['A quantum effect called superabsorption allows a collection of molecules to absorb light more'] Output: ['A quantum effect called superabsorption allows a collection of molecules to absorb light more strongly than the sum of the individual absorptions of the molecules. This effect is due to the coherent interaction of the molecules with the electromagnetic field. 
The superabsorption effect has been observed in a number of systems, including liquid crystals, liquid crystals in'] -------------------------------------------------------------------------------------------------- Batch n°3 Input: ['A SpaceX Starship rocket prototype has exploded during a pressure test. It was'] Output: ['A SpaceX Starship rocket prototype has exploded during a pressure test. It was the first time a Starship prototype had been tested in the air. The explosion occurred at the SpaceX facility in Boca Chica, Texas. The Starship prototype was being tested for its ability to withstand the pressure of flight. The explosion occurred at'] -------------------------------------------------------------------------------------------------- Batch n°4 Input: ['Scalene is a high-performance CPU and memory profiler for Python.'] Output: ['Scalene is a high-performance CPU and memory profiler for Python. It is designed to be a lightweight, portable, and easy-to-use profiler. Scalene is a Python package that can be installed on any platform that supports Python. Scalene is a lightweight, portable, and easy-to-use profiler'] -------------------------------------------------------------------------------------------------- Batch n°5 Input: ['With the rise of cheap small "Cube Satellites", startups are now'] Output: ['With the rise of cheap small "Cube Satellites", startups are now able to launch their own satellites for a fraction of the cost of a traditional launch. This has led to a proliferation of small satellites, which are now being used for a wide range of applications. The most common use of small satellites is for communications,'] ``` In the next section, we explain how to use the script we wrote to perform this benchmark or to apply it on any dataset you like from the Hugging Face Hub! ### How to reproduce these results? The script used for benchmarking BLOOMZ on Gaudi2 and first-gen Gaudi is available [here](https://github.com/huggingface/optimum-habana/tree/main/examples/text-generation). Before running it, please make sure that the latest versions of SynapseAI and the Gaudi drivers are installed following [the instructions given by Habana](https://docs.habana.ai/en/latest/Installation_Guide/index.html). Then, run the following: ```bash git clone https://github.com/huggingface/optimum-habana.git cd optimum-habana && pip install . && cd examples/text-generation pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.9.0 ``` Finally, you can launch the script as follows: ```bash python ../gaudi_spawn.py --use_deepspeed --world_size 8 run_generation.py --model_name_or_path bigscience/bloomz --use_hpu_graphs --use_kv_cache --max_new_tokens 100 ``` For multi-node inference, you can follow [this guide](https://huggingface.co/docs/optimum/habana/usage_guides/multi_node_training) from the documentation of Optimum Habana. You can also load any dataset from the Hugging Face Hub to get prompts that will be used for generation using the argument `--dataset_name my_dataset_name`. This benchmark was performed with Transformers v4.28.1, SynapseAI v1.9.0 and Optimum Habana v1.5.0. 
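For instance, to reproduce the *tldr_news* completion example shown above, one could pass the dataset name directly to the same launch command. This is an illustrative sketch: the exact preprocessing (keeping only the *content* column and truncating the prompts to 16 tokens) may require additional script options.

```bash
python ../gaudi_spawn.py --use_deepspeed --world_size 8 run_generation.py \
  --model_name_or_path bigscience/bloomz \
  --use_hpu_graphs \
  --use_kv_cache \
  --max_new_tokens 50 \
  --dataset_name JulesBelveze/tldr_news
```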
For GPUs, [here](https://github.com/huggingface/transformers-bloom-inference/blob/main/bloom-inference-scripts/bloom-ds-inference.py) is the script that led to the results that were previously presented in [this blog post](https://huggingface.co/blog/bloom-inference-pytorch-scripts) (and [here](https://github.com/huggingface/transformers-bloom-inference/tree/main/bloom-inference-scripts#deepspeed-inference) are the instructions to use it). To use CUDA graphs, static shapes are necessary and this is not supported in 🤗 Transformers. You can use [this repo](https://github.com/HabanaAI/Model-References/tree/1.8.0/PyTorch/nlp/bloom) written by the Habana team to enable them. ## Conclusion We see in this article that **Habana Gaudi2 performs BLOOMZ inference faster than Nvidia A100 80GB**. And there is no need to write a complicated script as 🤗 [Optimum Habana](https://huggingface.co/docs/optimum/habana/index) provides easy-to-use tools to run inference with multi-billion-parameter models on HPUs. Future releases of Habana's SynapseAI SDK are expected to speed up performance, so we will update this benchmark regularly as LLM inference optimizations on SynapseAI continue to advance. We are also looking forward to the performance benefits that will come with FP8 inference on Gaudi2. We also presented the results achieved with first-generation Gaudi. For smaller models, it can perform on par with or even better than A100 for almost a third of its price. It is a good alternative option to using GPUs for running inference with such a big model like BLOOMZ. If you are interested in accelerating your Machine Learning training and inference workflows using the latest AI hardware accelerators and software libraries, check out our [Expert Acceleration Program](https://huggingface.co/support). To learn more about Habana solutions, [read about our partnership and contact them here](https://huggingface.co/hardware/habana). To learn more about Hugging Face efforts to make AI hardware accelerators easy to use, check out our [Hardware Partner Program](https://huggingface.co/hardware). ### Related Topics - [Faster Training and Inference: Habana Gaudi-2 vs Nvidia A100 80GB](https://huggingface.co/blog/habana-gaudi-2-benchmark) - [Leverage DeepSpeed to Train Faster and Cheaper Large Scale Transformer Models with Hugging Face and Habana Labs Gaudi](https://developer.habana.ai/events/leverage-deepspeed-to-train-faster-and-cheaper-large-scale-transformer-models-with-hugging-face-and-habana-labs-gaudi/) --- Thanks for reading! If you have any questions, feel free to contact me, either through [Github](https://github.com/huggingface/optimum-habana) or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [LinkedIn](https://www.linkedin.com/in/regispierrard/). [^1]: “Zero-shot” refers to the ability of a model to complete a task on new or unseen input data, i.e. without having been provided any training examples of this kind of data. We provide the model with a prompt and a sequence of text that describes what we want our model to do, in natural language. Zero-shot classification excludes any examples of the desired task being completed. This differs from single or few-shot classification, as these tasks include a single or a few examples of the selected task.
huggingface/blog/blob/main/habana-gaudi-2-bloom.md
# MM-IMDb Based on the script [`run_mmimdb.py`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/mm-imdb/run_mmimdb.py). [MM-IMDb](http://lisi1.unal.edu.co/mmimdb/) is a Multimodal dataset with around 26,000 movies including images, plots and other metadata. ### Training on MM-IMDb ``` python run_mmimdb.py \ --data_dir /path/to/mmimdb/dataset/ \ --model_type bert \ --model_name_or_path bert-base-uncased \ --output_dir /path/to/save/dir/ \ --do_train \ --do_eval \ --max_seq_len 512 \ --gradient_accumulation_steps 20 \ --num_image_embeds 3 \ --num_train_epochs 100 \ --patience 5 ```
huggingface/transformers/blob/main/examples/research_projects/mm-imdb/README.md
@gradio/tabitem ## 0.1.0 ### Features - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Publish all components to npm. Thanks [@pngwn](https://github.com/pngwn)! ## 0.1.0-beta.8 ### Patch Changes - Updated dependencies [[`667802a6c`](https://github.com/gradio-app/gradio/commit/667802a6cdbfb2ce454a3be5a78e0990b194548a)]: - @gradio/utils@0.2.0-beta.6 - @gradio/tabs@0.1.0-beta.8 ## 0.1.0-beta.7 ### Features - [#6016](https://github.com/gradio-app/gradio/pull/6016) [`83e947676`](https://github.com/gradio-app/gradio/commit/83e947676d327ca2ab6ae2a2d710c78961c771a0) - Format js in v4 branch. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ## 0.1.0-beta.6 ### Features - [#5960](https://github.com/gradio-app/gradio/pull/5960) [`319c30f3f`](https://github.com/gradio-app/gradio/commit/319c30f3fccf23bfe1da6c9b132a6a99d59652f7) - rererefactor frontend files. Thanks [@pngwn](https://github.com/pngwn)! - [#5938](https://github.com/gradio-app/gradio/pull/5938) [`13ed8a485`](https://github.com/gradio-app/gradio/commit/13ed8a485d5e31d7d75af87fe8654b661edcca93) - V4: Use beta release versions for '@gradio' packages. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ## 0.0.6 ### Patch Changes - Updated dependencies []: - @gradio/utils@0.1.2 - @gradio/tabs@0.0.7 ## 0.0.5 ### Features - [#5590](https://github.com/gradio-app/gradio/pull/5590) [`d1ad1f671`](https://github.com/gradio-app/gradio/commit/d1ad1f671caef9f226eb3965f39164c256d8615c) - Attach `elem_classes` selectors to layout elements, and an id to the Tab button (for targeting via CSS/JS). Thanks [@abidlabs](https://github.com/abidlabs)! ## 0.0.4 ### Patch Changes - Updated dependencies []: - @gradio/utils@0.1.1 - @gradio/tabs@0.0.5 ## 0.0.3 ### Patch Changes - Updated dependencies [[`abf1c57d`](https://github.com/gradio-app/gradio/commit/abf1c57d7d85de0df233ee3b38aeb38b638477db)]: - @gradio/utils@0.1.0 - @gradio/tabs@0.0.4 ## 0.0.2 ### Highlights #### Improve startup performance and markdown support ([#5279](https://github.com/gradio-app/gradio/pull/5279) [`fe057300`](https://github.com/gradio-app/gradio/commit/fe057300f0672c62dab9d9b4501054ac5d45a4ec)) ##### Improved markdown support We now have better support for markdown in `gr.Markdown` and `gr.Dataframe`. Including syntax highlighting and Github Flavoured Markdown. We also have more consistent markdown behaviour and styling. ##### Various performance improvements These improvements will be particularly beneficial to large applications. - Rather than attaching events manually, they are now delegated, leading to a significant performance improvement and addressing a performance regression introduced in a recent version of Gradio. App startup for large applications is now around twice as fast. - Optimised the mounting of individual components, leading to a modest performance improvement during startup (~30%). - Corrected an issue that was causing markdown to re-render infinitely. - Ensured that the `gr.3DModel` does re-render prematurely. Thanks [@pngwn](https://github.com/pngwn)!
gradio-app/gradio/blob/main/js/tabitem/CHANGELOG.md
-- title: Deploying 🤗 ViT on Kubernetes with TF Serving thumbnail: /blog/assets/94_tf_serving_kubernetes/thumb.png authors: - user: chansung guest: true - user: sayakpaul guest: true --- # Deploying 🤗 ViT on Kubernetes with TF Serving # Introduction In the [<u>previous post</u>](https://huggingface.co/blog/tf-serving-vision), we showed how to deploy a [<u>Vision Transformer (ViT)</u>](https://huggingface.co/docs/transformers/main/en/model_doc/vit) model from 🤗 Transformers locally with TensorFlow Serving. We covered topics like embedding preprocessing and postprocessing operations within the Vision Transformer model, handling gRPC requests, and more! While local deployments are an excellent head start to building something useful, you’d need to perform deployments that can serve many users in real-life projects. In this post, you’ll learn how to scale the local deployment from the previous post with Docker and Kubernetes. Therefore, we assume some familiarity with Docker and Kubernetes. This post builds on top of the [<u>previous post</u>](https://huggingface.co/blog/tf-serving-vision), so, we highly recommend reading it first. You can find all the code discussed throughout this post in [<u>this repository</u>](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/hf_vision_model_onnx_gke). # Why go with Docker and Kubernetes? The basic workflow of scaling up a deployment like ours includes the following steps: - **Containerizing the application logic**: The application logic involves a served model that can handle requests and return predictions. For containerization, Docker is the industry-standard go-to. - **Deploying the Docker container**: You have various options here. The most widely used option is deploying the Docker container on a Kubernetes cluster. Kubernetes provides numerous deployment-friendly features (e.g. autoscaling and security). You can use a solution like [<u>Minikube</u>](https://minikube.sigs.k8s.io/docs/start/) to manage Kubernetes clusters locally or a serverless solution like [<u>Elastic Kubernetes Service (EKS)</u>](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html). You might be wondering why use an explicit setup like this in the age of [<u>Sagemaker,</u>](https://aws.amazon.com/sagemaker/) [<u>Vertex AI</u>](https://cloud.google.com/vertex-ai) that provides ML deployment-specific features right off the bat. It is fair to think about it. The above workflow is widely adopted in the industry, and many organizations benefit from it. It has already been battle-tested for many years. It also lets you have more granular control of your deployments while abstracting away the non-trivial bits. This post uses [<u>Google Kubernetes Engine (GKE)</u>](https://cloud.google.com/kubernetes-engine) to provision and manage a Kubernetes cluster. We assume you already have a billing-enabled GCP project if you’re using GKE. Also, note that you’d need to configure the [`gcloud`](https://cloud.google.com/sdk/gcloud) utility for performing the deployment on GKE. But the concepts discussed in this post equally apply should you decide to use Minikube. **Note**: The code snippets shown in this post can be executed on a Unix terminal as long as you have configured the `gcloud` utility along with Docker and `kubectl`. More instructions are available in the [accompanying repository](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/hf_vision_model_onnx_gke). 
# Containerization with Docker The serving model can handle raw image inputs as bytes and is capable of preprocessing and postprocessing. In this section, you’ll see how to containerize that model using the [<u>base TensorFlow Serving Image</u>](http://hub.docker.com/r/tensorflow/serving/tags/). TensorFlow Serving consumes models in the [`SavedModel`](https://www.tensorflow.org/guide/saved_model) format. Recall how you obtained such a `SavedModel` in the [<u>previous post</u>](https://huggingface.co/blog/tf-serving-vision). We assume that you have the `SavedModel` compressed in `tar.gz` format. You can fetch it from [<u>here</u>](https://huggingface.co/deploy-hf-tf-vit/vit-base16-extended/resolve/main/saved_model.tar.gz) just in case. Then `SavedModel` should be placed in the special directory structure of `<MODEL_NAME>/<VERSION>/<SavedModel>`. This is how TensorFlow Serving simultaneously manages multiple deployments of different versioned models. ## Preparing the Docker image The shell script below places the `SavedModel` in `hf-vit/1` under the parent directory models. You'll copy everything inside it when preparing the Docker image. There is only one model in this example, but this is a more generalizable approach. ```bash $ MODEL_TAR=model.tar.gz $ MODEL_NAME=hf-vit $ MODEL_VERSION=1 $ MODEL_PATH=models/$MODEL_NAME/$MODEL_VERSION $ mkdir -p $MODEL_PATH $ tar -xvf $MODEL_TAR --directory $MODEL_PATH ``` Below, we show how the `models` directory is structured in our case: ```bash $ find /models /models /models/hf-vit /models/hf-vit/1 /models/hf-vit/1/keras_metadata.pb /models/hf-vit/1/variables /models/hf-vit/1/variables/variables.index /models/hf-vit/1/variables/variables.data-00000-of-00001 /models/hf-vit/1/assets /models/hf-vit/1/saved_model.pb ``` The custom TensorFlow Serving image should be built on top of the [base one](http://hub.docker.com/r/tensorflow/serving/tags/). There are various approaches for this, but you’ll do this by running a Docker container as illustrated in the [<u>official document</u>](https://www.tensorflow.org/tfx/serving/serving_kubernetes#commit_image_for_deployment). We start by running `tensorflow/serving` image in background mode, then the entire `models` directory is copied to the running container as below. ```bash $ docker run -d --name serving_base tensorflow/serving $ docker cp models/ serving_base:/models/ ``` We used the official Docker image of TensorFlow Serving as the base, but you can use ones that you have [<u>built from source</u>](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/setup.md#building-from-source) as well. **Note**: TensorFlow Serving benefits from hardware optimizations that leverage instruction sets such as [<u>AVX512</u>](https://en.wikipedia.org/wiki/AVX-512). These instruction sets can [<u>speed up deep learning model inference</u>](https://huggingface.co/blog/bert-cpu-scaling-part-1). So, if you know the hardware on which the model will be deployed, it’s often beneficial to obtain an optimized build of the TensorFlow Serving image and use it throughout. Now that the running container has all the required files in the appropriate directory structure, we need to create a new Docker image that includes these changes. This can be done with the [`docker commit`](https://docs.docker.com/engine/reference/commandline/commit/) command below, and you'll have a new Docker image named `$NEW_IMAGE`. 
One important thing to note is that you need to set the `MODEL_NAME` environment variable to the model name, which is `hf-vit` in this case. This tells TensorFlow Serving what model to deploy. ```bash $ NEW_IMAGE=tfserving:$MODEL_NAME $ docker commit \ --change "ENV MODEL_NAME $MODEL_NAME" \ serving_base $NEW_IMAGE ``` ## Running the Docker image locally Lastly, you can run the newly built Docker image locally to see if it works fine. Below you see the output of the `docker run` command. Since the output is verbose, we trimmed it down to focus on the important bits. Also, it is worth noting that it opens up `8500` and `8501` ports for gRPC and HTTP/REST endpoints, respectively. ```shell $ docker run -p 8500:8500 -p 8501:8501 -t $NEW_IMAGE & ---------OUTPUT--------- (Re-)adding model: hf-vit Successfully reserved resources to load servable {name: hf-vit version: 1} Approving load for servable version {name: hf-vit version: 1} Loading servable version {name: hf-vit version: 1} Reading SavedModel from: /models/hf-vit/1 Reading SavedModel debug info (if present) from: /models/hf-vit/1 Successfully loaded servable version {name: hf-vit version: 1} Running gRPC ModelServer at 0.0.0.0:8500 ... Exporting HTTP/REST API at:localhost:8501 ... ``` ## Pushing the Docker image The final step here is to push the Docker image to an image repository. You'll use [<u>Google Container Registry (GCR)</u>](https://cloud.google.com/container-registry) for this purpose. The following lines of code can do this for you: ```bash $ GCP_PROJECT_ID=<GCP_PROJECT_ID> $ GCP_IMAGE=gcr.io/$GCP_PROJECT_ID/$NEW_IMAGE $ gcloud auth configure-docker $ docker tag $NEW_IMAGE $GCP_IMAGE $ docker push $GCP_IMAGE ``` Since we’re using GCR, you need to prefix the Docker image tag ([<u>note</u>](https://cloud.google.com/container-registry/docs/pushing-and-pulling) the other formats too) with `gcr.io/<GCP_PROJECT_ID>` . With the Docker image prepared and pushed to GCR, you can now proceed to deploy it on a Kubernetes cluster. # Deploying on a Kubernetes cluster Deployment on a Kubernetes cluster requires the following: - Provisioning a Kubernetes cluster, done with [<u>Google Kubernetes Engine</u>](https://cloud.google.com/kubernetes-engine) (GKE) in this post. However, you’re welcome to use other platforms and tools like EKS or Minikube. - Connecting to the Kubernetes cluster to perform a deployment. - Writing YAML manifests. - Performing deployment with the manifests with a utility tool, [`kubectl`](https://kubernetes.io/docs/reference/kubectl/). Let’s go over each of these steps. ## Provisioning a Kubernetes cluster on GKE You can use a shell script like so for this (available [<u>here</u>](https://github.com/sayakpaul/deploy-hf-tf-vision-models/blob/main/hf_vision_model_tfserving_gke/provision_gke_cluster.sh)): ```bash $ GKE_CLUSTER_NAME=tfs-cluster $ GKE_CLUSTER_ZONE=us-central1-a $ NUM_NODES=2 $ MACHINE_TYPE=n1-standard-8 $ gcloud container clusters create $GKE_CLUSTER_NAME \ --zone=$GKE_CLUSTER_ZONE \ --machine-type=$MACHINE_TYPE \ --num-nodes=$NUM_NODES ``` GCP offers a variety of machine types to configure the deployment in a way you want. We encourage you to refer to the [<u>documentation</u>](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) to learn more about it. Once the cluster is provisioned, you need to connect to it to perform the deployment. Since GKE is used here, you also need to authenticate yourself. 
You can use a shell script like so to do both of these: ```bash $ GCP_PROJECT_ID=<GCP_PROJECT_ID> $ export USE_GKE_GCLOUD_AUTH_PLUGIN=True $ gcloud container clusters get-credentials $GKE_CLUSTER_NAME \ --zone $GKE_CLUSTER_ZONE \ --project $GCP_PROJECT_ID ``` The `gcloud container clusters get-credentials` command takes care of both connecting to the cluster and authentication. Once this is done, you’re ready to write the manifests. ## Writing Kubernetes manifests Kubernetes manifests are written in [<u>YAML</u>](https://yaml.org/) files. While it’s possible to use a single manifest file to perform the deployment, creating separate manifest files is often beneficial for delegating the separation of concerns. It’s common to use three manifest files for achieving this: - `deployment.yaml` defines the desired state of the Deployment by providing the name of the Docker image, additional arguments when running the Docker image, the ports to open for external accesses, and the limits of resources. - `service.yaml` defines connections between external clients and inside Pods in the Kubernetes cluster. - `hpa.yaml` defines rules to scale up and down the number of Pods consisting of the Deployment, such as the percentage of CPU utilization. You can find the relevant manifests for this post [<u>here</u>](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/hf_vision_model_tfserving_gke/.kube/base). Below, we present a pictorial overview of how these manifests are consumed. ![](./assets/94_tf_serving_kubernetes/manifest_propagation.png) Next, we go through the important parts of each of these manifests. **`deployment.yaml`**: ```yaml apiVersion: apps/v1 kind: Deployment metadata: labels: app: tfs-server name: tfs-server ... spec: containers: - image: gcr.io/$GCP_PROJECT_ID/tfserving-hf-vit:latest name: tfs-k8s imagePullPolicy: Always args: ["--tensorflow_inter_op_parallelism=2", "--tensorflow_intra_op_parallelism=8"] ports: - containerPort: 8500 name: grpc - containerPort: 8501 name: restapi resources: limits: cpu: 800m requests: cpu: 800m ... ``` You can configure the names like `tfs-server`, `tfs-k8s` any way you want. Under `containers`, you specify the Docker image URI the deployment will use. The current resource utilization gets monitored by setting the allowed bounds of the `resources` for the container. It can let Horizontal Pod Autoscaler (discussed later) decide to scale up or down the number of containers. `requests.cpu` is the minimal amount of CPU resources to make the container work correctly set by operators. Here 800m means 80% of the whole CPU resource. So, HPA monitors the average CPU utilization out of the sum of `requests.cpu` across all Pods to make scaling decisions. Besides Kubernetes specific configuration, you can specify TensorFlow Serving specific options in `args`.In this case, you have two: - `tensorflow_inter_op_parallelism`, which sets the number of threads to run in parallel to execute independent operations. The recommended value for this is 2. - `tensorflow_intra_op_parallelism`, which sets the number of threads to run in parallel to execute individual operations. The recommended value is the number of physical cores the deployment CPU has. You can learn more about these options (and others) and tips on tuning them for deployment from [<u>here</u>](https://www.tensorflow.org/tfx/serving/performance) and [<u>here</u>](https://github.com/IntelAI/models/blob/master/docs/general/tensorflow_serving/GeneralBestPractices.md). 
**`service.yaml`**: ```yaml apiVersion: v1 kind: Service metadata: labels: app: tfs-server name: tfs-server spec: ports: - port: 8500 protocol: TCP targetPort: 8500 name: tf-serving-grpc - port: 8501 protocol: TCP targetPort: 8501 name: tf-serving-restapi selector: app: tfs-server type: LoadBalancer ``` We made the service type ‘LoadBalancer’ so the endpoints are exposed externally to the Kubernetes cluster. It selects the ‘tfs-server’ Deployment to make connections with external clients via the specified ports. We open two ports of ‘8500’ and ‘8501’ for gRPC and HTTP/REST connections respectively. **`hpa.yaml`**: ```yaml apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: tfs-server spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: tfs-server minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 80 ``` HPA stands for **H**orizontal **P**od **A**utoscaler. It sets criteria to decide when to scale the number of Pods in the target Deployment. You can learn more about the autoscaling algorithm internally used by Kubernetes [<u>here</u>](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale). Here you specify how Kubernetes should handle autoscaling. In particular, you define the replica bound within which it should perform autoscaling – `minReplicas\` and `maxReplicas` and the target CPU utilization. `targetCPUUtilizationPercentage` is an important metric for autoscaling. The following thread aptly summarizes what it means (taken from [<u>here</u>](https://stackoverflow.com/a/42530520/7636462)): > The CPU utilization is the average CPU usage of all Pods in a deployment across the last minute divided by the requested CPU of this deployment. If the mean of the Pods' CPU utilization is higher than the target you defined, your replicas will be adjusted. Recall specifying `resources` in the deployment manifest. By specifying the `resources`, the Kubernetes control plane starts monitoring the metrics, so the `targetCPUUtilization` works. Otherwise, HPA doesn't know the current status of the Deployment. You can experiment and set these to the required numbers based on your requirements. Note, however, that autoscaling will be contingent on the quota you have available on GCP since GKE internally uses [<u>Google Compute Engine</u>](https://cloud.google.com/compute) to manage these resources. ## Performing the deployment Once the manifests are ready, you can apply them to the currently connected Kubernetes cluster with the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command. ```bash $ kubectl apply -f deployment.yaml $ kubectl apply -f service.yaml $ kubectl apply -f hpa.yaml ``` While using `kubectl` is fine for applying each of the manifests to perform the deployment, it can quickly become harder if you have many different manifests. This is where a utility like [<u>Kustomize</u>](https://kustomize.io/) can be helpful. You simply define another specification named `kustomization.yaml` like so: ```yaml commonLabels: app: tfs-server resources: - deployment.yaml - hpa.yaml - service.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ``` Then it’s just a one-liner to perform the actual deployment: ```bash $ kustomize build . | kubectl apply -f - ``` Complete instructions are available [<u>here</u>](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/hf_vision_model_tfserving_gke). 
Once the deployment has been performed, we can retrieve the endpoint IP like so:

```bash
$ kubectl rollout status deployment/tfs-server
$ kubectl get svc tfs-server --watch

---------OUTPUT---------

NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
tfs-server   LoadBalancer   xxxxxxxxxx   xxxxxxxxxx    8500:30869/TCP,8501:31469/TCP   xxx
```

Note down the external IP when it becomes available.

And that sums up all the steps you need to deploy your model on Kubernetes! Kubernetes elegantly provides abstractions for complex bits like autoscaling and cluster management while letting you focus on the crucial aspects you should care about while deploying a model. These include resource utilization, security (we didn’t cover that here), performance north stars like latency, etc.

# Testing the endpoint

Given that you have an external IP for the endpoint, you can use the following listing to test it:

```py
import tensorflow as tf
import json
import base64
import requests

image_path = tf.keras.utils.get_file(
    "image.jpg", "http://images.cocodataset.org/val2017/000000039769.jpg"
)
bytes_inputs = tf.io.read_file(image_path)
b64str = base64.urlsafe_b64encode(bytes_inputs.numpy()).decode("utf-8")
data = json.dumps(
    {"signature_name": "serving_default", "instances": [b64str]}
)

json_response = requests.post(
    "http://<ENDPOINT-IP>:8501/v1/models/hf-vit:predict",
    headers={"content-type": "application/json"},
    data=data
)
print(json.loads(json_response.text))

---------OUTPUT---------

{'predictions': [{'label': 'Egyptian cat', 'confidence': 0.896659195}]}
```

If you’re interested in how this deployment would perform when it meets more traffic, we recommend checking out [<u>this article</u>](https://blog.tensorflow.org/2022/07/load-testing-TensorFlow-Servings-REST-interface.html). Refer to the corresponding [<u>repository</u>](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/locust) to learn more about running load tests with Locust and visualizing the results.

# Notes on different TF Serving configurations

TensorFlow Serving [<u>provides</u>](https://www.tensorflow.org/tfx/serving/serving_config) various options to tailor the deployment based on your application use case. Below, we briefly discuss some of them.

**`enable_batching`** enables the batch inference capability that collects incoming requests within a certain time window, collates them as a batch, performs a batch inference, and returns the results of each request to the appropriate clients. TensorFlow Serving provides a rich set of configurable options (such as `max_batch_size`, `num_batch_threads`) to tailor it to your deployment needs. You can learn more about them [<u>here</u>](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/batching/README.md). Batching is particularly beneficial for applications where you don't need predictions from a model instantly. In those cases, you'd typically gather together multiple samples for prediction in batches and then send those batches for prediction. Lucky for us, TensorFlow Serving can configure all of these automatically when we enable its batching capabilities (a small client-side sketch of sending batched requests follows at the end of this section).

**`enable_model_warmup`** warms up some of the TensorFlow components that are lazily instantiated with dummy input data. This way, you can ensure everything is appropriately loaded up and that there will be no lags during the actual service time.
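If you want to amortize network overhead on the client side as well, the same REST endpoint accepts several instances in a single call. Below is a small sketch building on the listing above; the endpoint IP is a placeholder, and server-side batching (via `enable_batching`) is configured independently of this.

```py
import base64
import json

import requests
import tensorflow as tf

# For illustration, we reuse the same image three times; in practice these would be different samples.
image_path = tf.keras.utils.get_file(
    "image.jpg", "http://images.cocodataset.org/val2017/000000039769.jpg"
)
b64str = base64.urlsafe_b64encode(tf.io.read_file(image_path).numpy()).decode("utf-8")

# Pack multiple samples into a single request payload.
data = json.dumps(
    {"signature_name": "serving_default", "instances": [b64str, b64str, b64str]}
)
json_response = requests.post(
    "http://<ENDPOINT-IP>:8501/v1/models/hf-vit:predict",
    headers={"content-type": "application/json"},
    data=data,
)
# One prediction comes back per instance, in the same order.
print(json.loads(json_response.text)["predictions"])
```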
# Conclusion In this post and the associated [repository](https://github.com/sayakpaul/deploy-hf-tf-vision-models), you learned about deploying the Vision Transformer model from 🤗 Transformers on a Kubernetes cluster. If you’re doing this for the first time, the steps may appear to be a little daunting, but once you get the grasp, they’ll soon become an essential component of your toolbox. If you were already familiar with this workflow, we hope this post was still beneficial for you. We applied the same deployment workflow for an ONNX-optimized version of the same Vision Transformer model. For more details, check out [this link](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/hf_vision_model_onnx_gke). ONNX-optimized models are especially beneficial if you're using x86 CPUs for deployment. In the next post, we’ll show you how to perform these deployments with significantly less code with [<u>Vertex AI</u>](https://cloud.google.com/vertex-ai) – more like `model.deploy(autoscaling_config=...)` and boom! We hope you’re just as excited as we are. # Acknowledgement Thanks to the ML Developer Relations Program team at Google, which provided us with GCP credits for conducting the experiments.
huggingface/blog/blob/main/deploy-tfserving-kubernetes.md
Creating a Real-Time Dashboard from BigQuery Data

Tags: TABULAR, DASHBOARD, PLOTS

[Google BigQuery](https://cloud.google.com/bigquery) is a cloud-based service for processing very large datasets. It is a serverless and highly scalable data warehousing solution that enables users to analyze data using SQL-like queries.

In this tutorial, we will show you how to query a BigQuery dataset in Python and display the data in a real-time dashboard using `gradio`. The dashboard will look like this:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/bigquery-dashboard.gif">

We'll cover the following steps in this guide:

1. Setting up your BigQuery credentials
2. Using the BigQuery client
3. Building the real-time dashboard (in just _7 lines of Python_)

We'll be working with the [New York Times' COVID dataset](https://www.nytimes.com/interactive/2021/us/covid-cases.html), which is available as a public dataset on BigQuery. The dataset, named `covid19_nyt.us_counties`, contains the latest information about the number of confirmed COVID cases and deaths in each county in the US.

**Prerequisites**: This guide uses [Gradio Blocks](../quickstart/#blocks-more-flexibility-and-control), so make sure you are familiar with the Blocks class.

## Setting up your BigQuery Credentials

To use Gradio with BigQuery, you will need to obtain your BigQuery credentials and use them with the [BigQuery Python client](https://pypi.org/project/google-cloud-bigquery/). If you already have BigQuery credentials (as a `.json` file), you can skip this section. Otherwise, you can do this for free in just a couple of minutes.

1. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)

2. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.

3. Then, click the "+ ENABLED APIS AND SERVICES" button, which allows you to enable specific services for your project. Search for "BigQuery API", click on it, and click the "Enable" button. If you see the "Manage" button, then BigQuery is already enabled, and you're all set.

4. In the "APIs & Services" menu, click on the "Credentials" tab and then click on the "Create Credentials" button.

5. In the "Create Credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. You can also grant the service account permissions by giving it a role such as "BigQuery User", which will allow you to run queries.

6. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download a JSON key file containing your credentials to your computer. It will look something like this:

```json
{
 "type": "service_account",
 "project_id": "your project",
 "private_key_id": "your private key id",
 "private_key": "private key",
 "client_email": "email",
 "client_id": "client id",
 "auth_uri": "https://accounts.google.com/o/oauth2/auth",
 "token_uri": "https://accounts.google.com/o/oauth2/token",
 "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
 "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```

## Using the BigQuery Client

Once you have your credentials, you will need to use the BigQuery Python client to authenticate with them. To do this, install the BigQuery Python client by running the following command in the terminal:

```bash
pip install google-cloud-bigquery[pandas]
```

You'll notice that we've installed the pandas add-on, which will be helpful for processing BigQuery datasets as pandas DataFrames. Once the client is installed, you can authenticate using your credentials by running the following code:

```py
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("path/to/key.json")
```

With your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.

Here is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:

```py
import numpy as np

QUERY = (
    'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
    'ORDER BY date DESC,confirmed_cases DESC '
    'LIMIT 20')

def run_query():
    query_job = client.query(QUERY)
    query_result = query_job.result()
    df = query_result.to_dataframe()
    # Select a subset of columns
    df = df[["confirmed_cases", "deaths", "county", "state_name"]]
    # Convert numeric columns to standard numpy types
    df = df.astype({"deaths": np.int64, "confirmed_cases": np.int64})
    return df
```

## Building the Real-Time Dashboard

Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly.

Here is an example of how to use the `gr.DataFrame` component to display the results. By passing the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, you can pass the keyword `every` to tell the dashboard to refresh every hour (60\*60 seconds).

```py
import gradio as gr

with gr.Blocks() as demo:
    gr.DataFrame(run_query, every=60*60)

demo.queue().launch()  # Run the demo using queuing
```

Perhaps you'd like to add a visualization to our dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables in the data, such as the number of cases and the number of deaths, and can be useful for exploring the data and gaining insights. Again, we can do this in real time by passing the `every` parameter.

Here is a complete example showing how to use `gr.ScatterPlot` for visualization in addition to displaying the data:

```py
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("# 💉 Covid Dashboard (Updated Hourly)")
    with gr.Row():
        gr.DataFrame(run_query, every=60*60)
        gr.ScatterPlot(run_query, every=60*60, x="confirmed_cases",
                       y="deaths", tooltip="county",
                       width=500, height=500)

demo.queue().launch()  # Run the demo with queuing enabled
```
gradio-app/gradio/blob/main/guides/cn/05_tabular-data-science-and-plots/creating-a-dashboard-from-bigquery-data.md
-- title: 'Welcome Stable-baselines3 to the Hugging Face Hub 🤗'
thumbnail: /blog/assets/47_sb3/thumbnail.png
authors:
- user: ThomasSimonini
---

# Welcome Stable-baselines3 to the Hugging Face Hub 🤗

At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. That’s why we’re happy to announce that we integrated [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) into the Hugging Face Hub.

[Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) is one of the most popular PyTorch Deep Reinforcement Learning libraries, making it easy to train and test your agents in a variety of environments (Gym, Atari, MuJoCo, Procgen...). With this integration, you can now host your saved models 💾 and load powerful models from the community.

In this article, we’re going to show how you can do it.

### Installation

To use stable-baselines3 with Hugging Face Hub, you just need to install these 2 libraries:

```bash
pip install huggingface_hub
pip install huggingface_sb3
```

### Finding Models

We’re currently uploading saved models of agents playing Space Invaders, Breakout, LunarLander and more. On top of this, you can find [all stable-baselines-3 models from the community here](https://huggingface.co/models?other=stable-baselines3)

Once you've found the model you need, you just have to copy the repository id:

![Image showing how to copy a repository id](assets/47_sb3/repo_id.jpg)

### Download a model from the Hub

The coolest feature of this integration is that you can now very easily load a saved model from the Hub into Stable-baselines3. In order to do that, you just need to copy the repo-id that contains your saved model and the name of the saved model zip file in the repo. For instance, `sb3/demo-hf-CartPole-v1`:

```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository including the extension .zip
checkpoint = load_from_hub(
    repo_id="sb3/demo-hf-CartPole-v1",
    filename="ppo-CartPole-v1.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent and watch it
eval_env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(
    model, eval_env, render=True, n_eval_episodes=5, deterministic=True, warn=False
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```

### Sharing a model to the Hub

In just a minute, you can get your saved model in the Hub.
First, you need to be logged in to Hugging Face to upload a model:

- If you're using Colab/Jupyter Notebooks:

```python
from huggingface_hub import notebook_login
notebook_login()
```

- Else:

```bash
huggingface-cli login
```

Then, in this example, we train a PPO agent to play CartPole-v1 and push it to a new repo `ThomasSimonini/demo-hf-CartPole-v1`:

```python
from huggingface_sb3 import push_to_hub
from stable_baselines3 import PPO

# Define a PPO model with MLP policy network
model = PPO("MlpPolicy", "CartPole-v1", verbose=1)

# Train it for 10000 timesteps
model.learn(total_timesteps=10_000)

# Save the model
model.save("ppo-CartPole-v1")

# Push this saved model to the hf repo
# If this repo does not exist it will be created
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename: the name of the file == "name" inside model.save("ppo-CartPole-v1")
push_to_hub(
    repo_id="ThomasSimonini/demo-hf-CartPole-v1",
    filename="ppo-CartPole-v1.zip",
    commit_message="Added Cartpole-v1 model trained with PPO",
)
```

Try it out and share your models with the community!

### What's next?

In the coming weeks and months, we will be extending the ecosystem by:

- Integrating [RL-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo)
- Uploading [RL-trained-agents models](https://github.com/DLR-RM/rl-trained-agents/tree/master) into the Hub: a big collection of pre-trained Reinforcement Learning agents using stable-baselines3
- Integrating other Deep Reinforcement Learning libraries
- Implementing Decision Transformers 🔥
- And more to come 🥳

The best way to keep in touch is to [join our discord server](https://discord.gg/YRAq8fMnUG) to exchange with us and with the community.

And if you want to dive deeper, we wrote a tutorial where you’ll learn:
- How to train a Deep Reinforcement Learning lander agent to land correctly on the Moon 🌕
- How to upload it to the Hub 🚀

![gif](assets/47_sb3/lunarlander.gif)

- How to download and use a saved model from the Hub that plays Space Invaders 👾.

![gif](assets/47_sb3/spaceinvaders.gif)

👉 [The tutorial](https://github.com/huggingface/huggingface_sb3/blob/main/Stable_Baselines_3_and_Hugging_Face_%F0%9F%A4%97_tutorial.ipynb)

### Conclusion

We're excited to see what you're working on with Stable-baselines3 and try your models in the Hub 😍.

And we would love to hear your feedback 💖. 📧 Feel free to [reach us](mailto:thomas.simonini@huggingface.co).

Finally, we would like to thank the SB3 team and in particular [Antonin Raffin](https://araffin.github.io/) for their precious help with the integration of the library 🤗.

### Would you like to integrate your library to the Hub?

This integration is possible thanks to the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library, which has all our widgets and the API for all our supported libraries. If you would like to integrate your library to the Hub, we have a [guide](https://huggingface.co/docs/hub/models-adding-libraries) for you!
huggingface/blog/blob/main/sb3.md
!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Git vs HTTP paradigm

The `huggingface_hub` library is a library for interacting with the Hugging Face Hub, which is a collection of git-based repositories (models, datasets or Spaces). There are two main ways to access the Hub using `huggingface_hub`.

The first approach, the so-called "git-based" approach, is led by the [`Repository`] class. This method uses a wrapper around the `git` command with additional functions specifically designed to interact with the Hub. The second option, called the "HTTP-based" approach, involves making HTTP requests using the [`HfApi`] client. Let's examine the pros and cons of each approach.

## Repository: the historical git-based approach

At first, `huggingface_hub` was mostly built around the [`Repository`] class. It provides Python wrappers for common `git` commands such as `"git add"`, `"git commit"`, `"git push"`, `"git tag"`, `"git checkout"`, etc.

The library also helps with setting credentials and tracking large files, which are often used in machine learning repositories. Additionally, the library allows you to execute its methods in the background, making it useful for uploading data during training.

The main advantage of using a [`Repository`] is that it allows you to maintain a local copy of the entire repository on your machine. This can also be a disadvantage as it requires you to constantly update and maintain this local copy. This is similar to traditional software development where each developer maintains their own local copy and pushes changes when working on a feature. However, in the context of machine learning, this may not always be necessary as users may only need to download weights for inference or convert weights from one format to another without the need to clone the entire repository.

<Tip warning={true}>

[`Repository`] is now deprecated in favor of the HTTP-based alternatives. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`.

</Tip>

## HfApi: a flexible and convenient HTTP client

The [`HfApi`] class was developed to provide an alternative to local git repositories, which can be cumbersome to maintain, especially when dealing with large models or datasets. The [`HfApi`] class offers the same functionality as git-based approaches, such as downloading and pushing files and creating branches and tags, but without the need for a local folder that needs to be kept in sync.

In addition to the functionalities already provided by `git`, the [`HfApi`] class offers additional features, such as the ability to manage repos, download files using caching for efficient reuse, search the Hub for repos and metadata, access community features such as discussions, PRs, and comments, and configure Spaces hardware and secrets.

## What should I use? And when?

Overall, the **HTTP-based approach is the recommended way to use** `huggingface_hub` in all cases. [`HfApi`] allows you to pull and push changes, work with PRs, tags and branches, interact with discussions and much more. Since the `0.16` release, the HTTP-based methods can also run in the background, which was the last major advantage of the [`Repository`] class.

However, not all git commands are available through [`HfApi`]. Some may never be implemented, but we are always trying to improve and close the gap. A minimal sketch of what the HTTP-based workflow looks like in practice is shown below.
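Here is what that looks like in practice: a minimal sketch using [`HfApi`] and `hf_hub_download`, where the repo id and file names are placeholders.

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()

# Create the repo if it doesn't exist yet, then push a single file —
# no local clone of the repository is needed.
api.create_repo(repo_id="username/my-model", exist_ok=True)
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="username/my-model",
)

# Download a single file, reusing the local cache if it is already present.
local_path = hf_hub_download(repo_id="username/my-model", filename="model.safetensors")
print(local_path)
```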
If you don't see your use case covered, please open [an issue on Github](https://github.com/huggingface/huggingface_hub)! We welcome feedback to help build the 🤗 ecosystem with and for our users. This preference of the http-based [`HfApi`] over the git-based [`Repository`] does not mean that git versioning will disappear from the Hugging Face Hub anytime soon. It will always be possible to use `git` commands locally in workflows where it makes sense.
huggingface/huggingface_hub/blob/main/docs/source/en/concepts/git_vs_http.md
-- title: "Director of Machine Learning Insights [Part 2: SaaS Edition]" thumbnail: /blog/assets/67_ml_director_insights/thumbnail.png authors: - user: britneymuller --- # Director of Machine Learning Insights [Part 2: SaaS Edition] _If you or your team are interested in building ML solutions faster visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_2) today!_ 👋 Welcome to Part 2 of our Director of Machine Learning Insights [Series]. Check out [Part 1 here.](https://huggingface.co/blog/ml-director-insights) Directors of Machine Learning have a unique seat at the AI table spanning the perspective of various roles and responsibilities. Their rich knowledge of ML frameworks, engineering, architecture, real-world applications and problem-solving provides deep insights into the current state of ML. For example, one director will note how using new transformers speech technology decreased their team’s error rate by 30% and how simple thinking can help save _a lot_ of computational power. Ever wonder what directors at Salesforce or ZoomInfo currently think about the state of Machine Learning? What their biggest challenges are? And what they're most excited about? Well, you're about to find out! In this second SaaS focused installment, you’ll hear from a deep learning for healthcare textbook author who also founded a non-profit for mentoring ML talent, a chess fanatic cybersecurity expert, an entrepreneur whose business was inspired by Barbie’s need to monitor brand reputation after a lead recall, and a seasoned patent and academic paper author who enjoys watching his 4 kids make the same mistakes as his ML models. 🚀 Let’s meet some top Machine Learning Directors in SaaS and hear what they have to say about Machine Learning: <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Omar-Rahman.jpeg"></a> ### [Omar Rahman](https://www.linkedin.com/in/omar-rahman-4739713a/) - Director of Machine Learning at [Salesforce](https://www.salesforce.com/) **Background:** Omar leads a team of Machine Learning and Data Engineers in leveraging ML for defensive security purposes as part of the Cybersecurity team. Previously, Omar has led data science and machine learning engineering teams at Adobe and SAP focusing on bringing intelligent capabilities to marketing cloud and procurement applications. Omar holds a Master’s degree in Electrical Engineering from Arizona State University. **Fun Fact:** Omar loves to play chess and volunteers his free time to guide and mentor graduate students in AI. **Salesforce:** World's #1 customer relationship management software. #### **1. How has ML made a positive impact on SaaS?** ML has benefited SaaS offerings in many ways. **a. Improving automation within applications:** For example, a service ticket router using NLP (Natural Language Processing) to understand the context of the service request and routing it to the appropriate team within the organization. **b. Reduction in code complexity:** Rules-based systems tend to get unwieldy as new rules are added, thereby increasing maintenance costs. For example, An ML-based language translation system is more accurate and robust with much fewer lines of code as compared to previous rules-based systems. **c. Better forecasting results in cost savings.** Being able to forecast more accurately helps in reducing backorders in the supply chain as well as cost savings due to a reduction in storage costs. 
#### **2. What are the biggest ML challenges within SaaS?** a. Productizing ML applications require a lot more than having a model. Being able to leverage the model for serving results, detecting and adapting to changes in statistics of data, etc. creates significant overhead in deploying and maintaining ML systems. b. In most large organizations, data is often siloed and not well maintained resulting in significant time spent in consolidating data, pre-processing, data cleaning activities, etc., thereby resulting in a significant amount of time and effort needed to create ML-based applications. #### **3. What’s a common mistake you see people make trying to integrate ML into SaaS?** Not focussing enough on the business context and the problem being solved, rather trying to use the latest and greatest algorithms and newly open-sourced libraries. A lot can be achieved by simple traditional ML techniques. #### **4. What excites you most about the future of ML?** Generalized artificial intelligence capabilities, if built and managed well, have the capability to transform humanity in more ways than one can imagine. My hope is that we will see great progress in the areas of healthcare and transportation. We already see the benefits of AI in radiology resulting in significant savings in manpower thereby enabling humans to focus on more complex tasks. Self-driving cars and trucks are already transforming the transportation sector. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Cao-Danica-Xiao.jpeg"></a> ### [Cao (Danica) Xiao](https://www.linkedin.com/in/caoxiao/) - Senior Director of Machine Learning at [Amplitude](https://amplitude.com/) **Background:** Cao (Danica) Xiao is the Senior Director and Head of Data Science and Machine Learning at Amplitude. Her team focuses on developing and deploying self-serving machine learning models and products based on multi-sourced user data to solve critical business challenges regarding digital production analytics and optimization. Besides, she is a passionate machine learning researcher with over 95+ papers published in leading CS venues. She is also a technology leader with extensive experience in machine learning roadmap creation, team building, and mentoring. Prior to Amplitude, Cao (Danica) was the Global Head of Machine Learning in the Analytics Center of Excellence of IQVIA. Before that, she was a research staff member at IBM Research and research lead at MIT-IBM Watson AI Lab. She got her Ph.D. degree in machine learning from the University of Washington, Seattle. Recently, she also co-authored a textbook on deep learning for healthcare and founded a non-profit organization for mentoring machine learning talents. **Fun Fact:** Cao is a cat-lover and is a mom to two cats: one Singapura girl and one British shorthair boy. **Amplitude:** A cloud-based product-analytics platform that helps customers build better products. #### **1. How has ML made a positive impact on SaaS?** ML plays a game-changing role in turning massive noisy machine-generated or user-generated data into answers to all kinds of business questions including personalization, prediction, recommendation, etc. It impacts a wide spectrum of industry verticals via SaaS. #### **2. What are the biggest ML challenges within SaaS?** Lack of data for ML model training that covers a broader range of industry use cases. 
While being a general solution for all industry verticals, still need to figure out how to handle the vertical-specific needs arising from business, or domain shift issue that affects ML model quality. #### **3. What’s a common mistake you see people make trying to integrate ML into a SaaS product?** Not giving users the flexibility to incorporate their business knowledge or other human factors that are critical to business success. For example, for a self-serve product recommendation, it would be great if users could control the diversity of recommended products. #### **4. What excites you most about the future of ML?** ML has seen tremendous success. It also evolves rapidly to address the current limitations (e.g., lack of data, domain shift, incorporation of domain knowledge). More ML technologies will be applied to solve business or customer needs. For example, interpretable ML for users to understand and trust the ML model outputs; counterfactual prediction for users to estimate the alternative outcome should they make a different business decision. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Raphael-Cohen.jpeg"></a> ### [Raphael Cohen](https://www.linkedin.com/in/raphael-cohen-63a87779/) - Director of the Machine Learning at [ZoomInfo](https://www.zoominfo.com/) **Background:** Raphael has a Ph.D. in the field of understanding health records and genetics, has authored 20 academic papers and has 8 patents. Raphael is also a leader in Data Science and Research with a background in NLP, Speech, healthcare, sales, customer journeys, and IT. **Fun Fact:** Raphael has 4 kids and enjoys seeing them learn and make the same mistakes as some of his ML models. **ZoomInfo:** Intelligent sales and marketing technology backed by the world's most comprehensive business database. #### **1. How has ML made a positive impact on SaaS** Machine Learning has facilitated the transcription of conversational data to help people unlock new insights and understandings. People can now easily view the things they talked about, summarized goals, takeaways, who spoke the most, who asked the best questions, what the next steps are, and more. This is incredibly useful for many interactions like email and video conferencing (which are more common now than ever). With [Chorus.ai](chorus.ai) we transcribe conversations as they are being recorded in real-time. We use an algorithm called [Wave2Vec](https://arxiv.org/abs/1904.05862) to do this. 🤗 [Hugging Face recently released their own Wave2Vec](https://huggingface.co/docs/transformers/model_doc/wav2vec2) version created for training that we derived a lot of value from. This new generation of transformers speech technology is incredibly powerful, it has decreased our error rate by 30%. Once we transcribe a conversation we can look into the content - this is where NLP comes in and we rely heavily on [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) to allow us to depict around 20 categories of topics inside recordings and emails; for example, are we talking about pricing, signing a contract, next steps, all of these topics are sent through email or discussed and it’s easy to now extract that info without having to go back through all of your conversations. This helps make people much better at their jobs. #### **2. What are the biggest ML challenges within SaaS?** The biggest challenge is understanding when to make use of ML. 
What problems can we solve with ML and which shouldn’t we? A lot of times we have a breakthrough with an ML model but a computationally lighter heuristic model is better suited to solve the problem we have. This is where a strong AI strategy comes into play. —Understand how you want your final product to work and at what efficiency. We also have the question of how to get the ML models you’ve built into production with a low environmental/computational footprint? Everyone is struggling with this; how to keep models in production in an efficient way without burning too many resources. A great example of this was when we moved to the Wav2Vec framework, which required us to break down our conversational audio into 15sec segments that get fed into this huge model. During this, we discovered that we were feeding the model a lot of segments that were pure silence. This is common when someone doesn’t show up or one person is waiting for another to join a meeting. Just by adding another very light model to tell us when not to send the silent segments into this big complicated ML model, we are able to save a lot of computational power/energy. This is an example of where engineers can think of other easier ways to speed up and save on model production. There’s an opportunity for more engineers to be savvier and better optimize models without burning too many resources. #### **3. What’s a common mistake you see people make trying to integrate ML into SaaS?** Is my solution the smartest solution? Is there a better way to break this down and solve it more efficiently? When we started identifying speakers we went directly with an ML method and this wasn’t as accurate as the video conference provider data. Since then we learned that the best way to do this is to start with the metadata of who speaks from the conference provider and then overlay that with a smart embedding model. We lost precious time during this learning curve. We shouldn’t have used this large ML solution if we stopped to understand there are other data sources we should invest in that will help us accelerate more efficiently. Think outside the box and don’t just take something someone built and think I have an idea of how to make this better. Where can we be smarter by understanding the problem better? #### **4. What excites you most about the future of ML?** I think we are in the middle of another revolution. For us, seeing our error rates drop by 30% by our Wave2Vec model was amazing. We had been working for years only getting 1% drops at each time and then within 3 months' time we saw such a huge improvement and we know that’s only the beginning. In academia, bigger and smarter things are happening. These pre-trained models are allowing us to do things we could never imagine before. This is very exciting! We are also seeing a lot of tech from NLP entering other domains like speech and vision and being able to power them. Another thing I’m really excited about is generating models! We recently worked with a company called [Bria.ai](https://bria.ai/) and they use these amazing GANs to create images. So you take a stock photo and you can turn it into a different photo by saying “remove glasses”, “add glasses” or “add hair” and it does so perfectly. The idea is that we can use this to generate data. We can take images of people from meetings not smiling and we can make them smile in order to build a data set for smile detection. This will be transformative. You can take 1 image and turn it into 100 images. 
This will also apply to speech generation which could be a powerful application within the service industry. #### **Any final thoughts?** –It’s challenging to put models into production. Believe data science teams need engineering embedded with them. Engineers should be part of the AI team. This will be an important structural pivot in the future. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Martin-Ostrovsky.jpeg"></a> ### [Martin Ostrovsky](https://www.linkedin.com/in/martinostrovsky/) Founder/CEO & Machine Learning Director at [Repustate Inc.](https://www.repustate.com/) **Background:** Martin is passionate about AI, ML, and NLP and is responsible for guiding the strategy and success of all Repustate products by leading the cross-functional team responsible for developing and improving them. He sets the strategy, roadmap, and feature definition for Repustate’s Global Text Analytics API, Sentiment Analysis, Deep Search, and Named Entity Recognition solutions. He has a Bachelor's degree in Computer Science from York University and earned his Master of Business Administration from the Schulich School of Business. **Fun Fact:** The first application of ML I used was for Barbie toys. My professor at Schulich Business School mentioned that Barbie needed to monitor their brand reputation due to a recall of the toys over concerns of excessive lead in them. Hiring people to manually go through each social post and online article seemed just so inefficient and ineffective to me. So I proposed to create a machine learning algorithm that would monitor what people thought of them from across all social media and online channels. The algorithm worked seamlessly. And that’s how I decided to name my company, Repustate - the “state” of your “repu”tation. 🤖 **Repustate:** A leading provider of text analytics services for enterprise companies. #### **1. Favorite ML business application?** My favorite ML application is cybersecurity. Cybersecurity remains the most critical part for any company (government or non-government) with regard to data. Machine Learning helps identify cyber threats, fight cyber-crime, including cyberbullying, and allows for a faster response to security breaches. ML algorithms quickly analyze the most likely vulnerabilities and potential malware and spyware applications based on user data. They can spot distortion in endpoint entry patterns and identify it as a potential data breach. #### **2. What is your biggest ML challenge?** The biggest ML challenge is audio to text transcription in the Arabic Language. There are quite a few systems that can decipher Arabic but they lack accuracy. Arabic is the official language of 26 countries and has 247 million native speakers and 29 million non-native speakers. It is a complex language with a rich vocabulary and many dialects. The sentiment mining tool needs to read data directly in Arabic if you want accurate insights from Arabic text because otherwise nuances are lost in translations. Translating text to English or any other language can completely change the meaning of words in Arabic, including even the root word. That’s why the algorithm needs to be trained on Arabic datasets and use a dedicated Arabic part-of-speech tagger. Because of these challenges, most companies fail to provide accurate Arabic audio to text translation to date. #### **3. 
What’s a common mistake you see people make trying to integrate ML?** The most common mistake that companies make while trying to integrate ML is insufficient data in their training datasets. Most ML models cannot distinguish between good data and insufficient data. Therefore, training datasets are considered relevant and used as a precedent to determine the results in most cases. This challenge isn’t limited to small- or medium-sized businesses; large enterprises have the same challenge. No matter what the ML processes are, companies need to ensure that the training datasets are reliable and exhaustive for their desired outcome by incorporating a human element into the early stages of machine learning. However, companies can create the required foundation for successful machine learning projects with a thorough review of accurate, comprehensive, and constant training data. #### **4. Where do you see ML having the biggest impact in the next 5-10 years?** In the next 5-10 years, ML will have the biggest impact on transforming the healthcare sector. **Networked hospitals and connected care:** With predictive care, command centers are all set to analyze clinical and location data to monitor supply and demand across healthcare networks in real-time. With ML, healthcare professionals will be able to spot high-risk patients more quickly and efficiently, thus removing bottlenecks in the system. You can check the spread of contractible diseases faster, take better measures to manage epidemics, identify at-risk patients more accurately, especially for genetic diseases, and more. **Better staff and patient experiences:** Predictive healthcare networks are expected to reduce wait times, improve staff workflows, and take on the ever-growing administrative burden. By learning from every patient, diagnosis, and procedure, ML is expected to create experiences that adapt to hospital staff as well as the patient. This improves health outcomes and reduces clinician shortages and burnout while enabling the system to be financially sustainable. --- 🤗 Thank you for joining us in this second installment of ML Director Insights. Stay tuned for more insights from ML Directors in Finance, Healthcare and e-Commerce. Big thanks to Omar Rahman, Cao (Danica) Xiao, Raphael Cohen, and Martin Ostrovsky for their brilliant insights and participation in this piece. We look forward to watching each of your continued successes and will be cheering you on each step of the way. 🎉 If you or your team are interested in accelerating your ML roadmap with Hugging Face Experts please visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_2) to learn more.
huggingface/blog/blob/main/ml-director-insights-2.md
-- title: "From GPT2 to Stable Diffusion: Hugging Face arrives to the Elixir community" thumbnail: /blog/assets/120_elixir-bumblebee/thumbnail.png authors: - user: josevalim guest: true --- # From GPT2 to Stable Diffusion: Hugging Face arrives to the Elixir community The [Elixir](https://elixir-lang.org/) community is glad to announce the arrival of several Neural Networks models, from GPT2 to Stable Diffusion, to Elixir. This is possible thanks to the [just announced Bumblebee library](https://news.livebook.dev/announcing-bumblebee-gpt2-stable-diffusion-and-more-in-elixir-3Op73O), which is an implementation of Hugging Face Transformers in pure Elixir. To help anyone get started with those models, the team behind [Livebook](https://livebook.dev/) - a computational notebook platform for Elixir - created a collection of "Smart cells" that allows developers to scaffold different Neural Network tasks in only 3 clicks. You can watch my video announcement to learn more: <iframe width="100%" style="aspect-ratio: 16 / 9;"src="https://www.youtube.com/embed/g3oyh3g1AtQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> Thanks to the concurrency and distribution support in the Erlang Virtual Machine, which Elixir runs on, developers can embed and serve these models as part of their existing [Phoenix web applications](https://phoenixframework.org/), integrate into their [data processing pipelines with Broadway](https://elixir-broadway.org), and deploy them alongside their [Nerves embedded systems](https://www.nerves-project.org/) - without a need for 3rd-party dependencies. In all scenarios, Bumblebee models compile to both CPU and GPU. ## Background The efforts to bring Machine Learning to Elixir started almost 2 years ago with [the Numerical Elixir (Nx) project](https://github.com/elixir-nx/nx/tree/main/nx). The Nx project implements multi-dimensional tensors alongside "numerical definitions", a subset of Elixir which can be compiled to the CPU/GPU. Instead of reinventing the wheel, Nx uses bindings for Google XLA ([EXLA](https://github.com/elixir-nx/nx/tree/main/exla)) and Libtorch ([Torchx](https://github.com/elixir-nx/nx/tree/main/torchx)) for CPU/GPU compilation. Several other projects were born from the Nx initiative. [Axon](https://github.com/elixir-nx/axon) brings functional composable Neural Networks to Elixir, taking inspiration from projects such as [Flax](https://github.com/google/flax) and [PyTorch Ignite](https://pytorch.org/ignite/index.html). The [Explorer](https://github.com/elixir-nx/explorer) project borrows from [dplyr](https://dplyr.tidyverse.org/) and [Rust's Polars](https://www.pola.rs/) to provide expressive and performant dataframes to the Elixir community. [Bumblebee](https://github.com/elixir-nx/bumblebee) and [Tokenizers](https://github.com/elixir-nx/tokenizers) are our most recent releases. We are thankful to Hugging Face for enabling collaborative Machine Learning across communities and tools, which played an essential role in bringing the Elixir ecosystem up to speed. Next, we plan to focus on training and transfer learning of Neural Networks in Elixir, allowing developers to augment and specialize pre-trained models according to the needs of their businesses and applications. We also hope to publish more on our development of traditional Machine Learning algorithms. 
## Your turn If you want to give Bumblebee a try, you can: * Download [Livebook v0.8](https://livebook.dev/) and automatically generate "Neural Networks tasks" from the "+ Smart" cell menu inside your notebooks. We are currently working on running Livebook on additional platforms and _Spaces_ (stay tuned! 😉). * We have also written [single-file Phoenix applications](https://github.com/elixir-nx/bumblebee/tree/main/examples/phoenix) as examples of Bumblebee models inside your Phoenix (+ LiveView) apps. Those should provide the necessary building blocks to integrate them as part of your production app. * For a more hands on approach, read some of our [notebooks](https://github.com/elixir-nx/bumblebee/tree/main/notebooks). If you want to help us build the Machine Learning ecosystem for Elixir, check out the projects above, and give them a try. There are many interesting areas, from compiler development to model building. For instance, pull requests that bring more models and architectures to Bumblebee are certainly welcome. The future is concurrent, distributed, and fun!
huggingface/blog/blob/main/elixir-bumblebee.md
PNASNet **Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to complex ones, pruning out unpromising structures as we go. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('pnasnet5large', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `pnasnet5large`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('pnasnet5large', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. 
## Citation ```BibTeX @misc{liu2018progressive, title={Progressive Neural Architecture Search}, author={Chenxi Liu and Barret Zoph and Maxim Neumann and Jonathon Shlens and Wei Hua and Li-Jia Li and Li Fei-Fei and Alan Yuille and Jonathan Huang and Kevin Murphy}, year={2018}, eprint={1712.00559}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: PNASNet Paper: Title: Progressive Neural Architecture Search URL: https://paperswithcode.com/paper/progressive-neural-architecture-search Models: - Name: pnasnet5large In Collection: PNASNet Metadata: FLOPs: 31458865950 Parameters: 86060000 File Size: 345153926 Architecture: - Average Pooling - Batch Normalization - Convolution - Depthwise Separable Convolution - Dropout - ReLU Tasks: - Image Classification Training Techniques: - Label Smoothing - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 100x NVIDIA P100 GPUs ID: pnasnet5large LR: 0.015 Dropout: 0.5 Crop Pct: '0.911' Momentum: 0.9 Batch Size: 1600 Image Size: '331' Interpolation: bicubic Label Smoothing: 0.1 Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/pnasnet.py#L343 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/pnasnet5large-bf079911.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 0.98% Top 5 Accuracy: 18.58% -->
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/pnasnet.mdx
Repositories Models, Spaces, and Datasets are hosted on the Hugging Face Hub as [Git repositories](https://git-scm.com/about), which means that version control and collaboration are core elements of the Hub. In a nutshell, a repository (also known as a **repo**) is a place where code and assets can be stored to back up your work, share it with the community, and work in a team. In these pages, you will go over the basics of getting started with Git and interacting with repositories on the Hub. Once you get the hang of it, you can explore the best practices and next steps that we've compiled for effective repository usage. ## Contents - [Getting Started with Repositories](./repositories-getting-started) - [Settings](./repositories-settings) - [Pull Requests & Discussions](./repositories-pull-requests-discussions) - [Pull Requests advanced usage](./repositories-pull-requests-discussions#pull-requests-advanced-usage) - [Webhooks](./webhooks) - [Notifications](./notifications) - [Collections](./collections) - [Repository size recommendations](./repositories-recommendations) - [Next Steps](./repositories-next-steps) - [Licenses](./repositories-licenses)
huggingface/hub-docs/blob/main/docs/hub/repositories.md
Model Based Reinforcement Learning (MBRL)

Model-based reinforcement learning only differs from its model-free counterpart in learning a *dynamics model*, but that has substantial downstream effects on how the decisions are made.

The dynamics model usually models the environment transition dynamics, \\( s_{t+1} = f_\theta (s_t, a_t) \\), but things like inverse dynamics models (mapping from states to actions) or reward models (predicting rewards) can be used in this framework.

## Simple definition

- There is an agent that repeatedly tries to solve a problem, **accumulating state and action data**.
- With that data, the agent creates a structured learning tool, *a dynamics model*, to reason about the world.
- With the dynamics model, the agent **decides how to act by predicting the future**.
- With those actions, **the agent collects more data, improves said model, and hopefully improves future actions**.

## Academic definition

Model-based reinforcement learning (MBRL) follows the framework of an agent interacting in an environment, **learning a model of said environment**, and then **leveraging the model for control (making decisions)**.

Specifically, the agent acts in a Markov Decision Process (MDP) governed by a transition function \\( s_{t+1} = f (s_t , a_t) \\) and returns a reward at each step \\( r(s_t, a_t) \\). With a collected dataset \\( D := \{ (s_i, a_i, s_{i+1}, r_i) \} \\), the agent learns a model, \\( s_{t+1} = f_\theta (s_t , a_t) \\), **to minimize the negative log-likelihood of the transitions**.

We employ sample-based model-predictive control (MPC) using the learned dynamics model, which optimizes the expected reward over a finite, recursively predicted horizon, \\( \tau \\), from a set of actions sampled from a uniform distribution \\( U(a) \\) (see [paper](https://arxiv.org/pdf/2002.04523) or [paper](https://arxiv.org/pdf/2012.09156.pdf) or [paper](https://arxiv.org/pdf/2009.01221.pdf)). A minimal code sketch of this sampling-based MPC loop is included at the end of this page.

## Further reading

For more information on MBRL, we recommend you check out the following resources:

- A [blog post on debugging MBRL](https://www.natolambert.com/writing/debugging-mbrl).
- A [recent review paper on MBRL](https://arxiv.org/abs/2006.16712).

## Author

This section was written by <a href="https://twitter.com/natolambert"> Nathan Lambert </a>
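To make the control loop described above concrete, here is a minimal random-shooting MPC sketch in Python. This is only an illustrative sketch: the dynamics and reward functions below are hypothetical stand-ins for the learned \\( f_\theta \\) and reward model, not part of any particular library.

```python
import numpy as np

# Hypothetical stand-ins for a learned dynamics model f_theta(s, a) and a reward model r(s, a).
# In practice these would be neural networks fit on the collected dataset D.
def dynamics_model(state, action):
    return state + 0.1 * action  # toy transition

def reward_model(state, action):
    return -np.sum(state ** 2) - 0.01 * np.sum(action ** 2)  # toy reward

def random_shooting_mpc(state, horizon=15, num_candidates=1000, action_dim=2):
    """Sample action sequences uniformly, roll them out through the model,
    and return the first action of the best-scoring sequence."""
    candidates = np.random.uniform(-1.0, 1.0, size=(num_candidates, horizon, action_dim))
    returns = np.zeros(num_candidates)
    for i, action_seq in enumerate(candidates):
        s = state.copy()
        for a in action_seq:  # recursively predict over the horizon
            returns[i] += reward_model(s, a)
            s = dynamics_model(s, a)
    best = np.argmax(returns)
    return candidates[best, 0]  # execute only the first action, then replan

state = np.array([1.0, -0.5])
print(random_shooting_mpc(state))
```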
huggingface/deep-rl-class/blob/main/units/en/unitbonus3/model-based.mdx
Structure your repository To host and share your dataset, create a dataset repository on the Hugging Face Hub and upload your data files. This guide will show you how to structure your dataset repository when you upload it. A dataset with a supported structure and file format (`.txt`, `.csv`, `.parquet`, `.jsonl`, `.mp3`, `.jpg`, `.zip` etc.) are loaded automatically with [`~datasets.load_dataset`], and it'll have a dataset viewer on its dataset page on the Hub. ## Main use-case The simplest dataset structure has two files: `train.csv` and `test.csv` (this works with any supported file format). Your repository will also contain a `README.md` file, the [dataset card](dataset_card) displayed on your dataset page. ``` my_dataset_repository/ ├── README.md ├── train.csv └── test.csv ``` In this simple case, you'll get a dataset with two splits: `train` (containing examples from `train.csv`) and `test` (containing examples from `test.csv`). ## Define your splits and subsets in YAML ## Splits If you have multiple files and want to define which file goes into which split, you can use the YAML `configs` field at the top of your README.md. For example, given a repository like this one: ``` my_dataset_repository/ ├── README.md ├── data.csv └── holdout.csv ``` You can define your splits by adding the `configs` field in the YAML block at the top of your README.md: ```yaml --- configs: - config_name: default data_files: - split: train path: "data.csv" - split: test path: "holdout.csv" --- ``` You can select multiple files per split using a list of paths: ``` my_dataset_repository/ ├── README.md ├── data/ │ ├── abc.csv │ └── def.csv └── holdout/ └── ghi.csv ``` ```yaml --- configs: - config_name: default data_files: - split: train path: - "data/abc.csv" - "data/def.csv" - split: test path: "holdout/ghi.csv" --- ``` Or you can use glob patterns to automatically list all the files you need: ```yaml --- configs: - config_name: default data_files: - split: train path: "data/*.csv" - split: test path: "holdout/*.csv" --- ``` <Tip warning={true}> Note that `config_name` field is required even if you have a single configuration. </Tip> ## Configurations Your dataset might have several subsets of data that you want to be able to load separately. In that case you can define a list of configurations inside the `configs` field in YAML: ``` my_dataset_repository/ ├── README.md ├── main_data.csv └── additional_data.csv ``` ```yaml --- configs: - config_name: main_data data_files: "main_data.csv" - config_name: additional_data data_files: "additional_data.csv" --- ``` Each configuration is shown separately on the Hugging Face Hub, and can be loaded by passing its name as a second parameter: ```python from datasets import load_dataset main_data = load_dataset("my_dataset_repository", "main_data") additional_data = load_dataset("my_dataset_repository", "additional_data") ``` ## Builder parameters Not only `data_files`, but other builder-specific parameters can be passed via YAML, allowing for more flexibility on how to load the data while not requiring any custom code. For example, define which separator to use in which configuration to load your `csv` files: ```yaml --- configs: - config_name: tab data_files: "main_data.csv" sep: "\t" - config_name: comma data_files: "additional_data.csv" sep: "," --- ``` Refer to [specific builders' documentation](./package_reference/builder_classes) to see what configuration parameters they have. <Tip> You can set a default configuration using `default: true`, e.g. 
you can run `main_data = load_dataset("my_dataset_repository")` if you set ```yaml - config_name: main_data data_files: "main_data.csv" default: true ``` </Tip> ## Automatic splits detection If no YAML is provided, 🤗 Datasets searches for certain patterns in the dataset repository to automatically infer the dataset splits. There is an order to the patterns, beginning with the custom filename split format to treating all files as a single split if no pattern is found. ### Directory name Your data files may also be placed into different directories named `train`, `test`, and `validation` where each directory contains the data files for that split: ``` my_dataset_repository/ ├── README.md └── data/ ├── train/ │ └── bees.csv ├── test/ │ └── more_bees.csv └── validation/ └── even_more_bees.csv ``` ### Filename splits If you don't have any non-traditional splits, then you can place the split name anywhere in the data file and it is automatically inferred. The only rule is that the split name must be delimited by non-word characters, like `test-file.csv` for example instead of `testfile.csv`. Supported delimiters include underscores, dashes, spaces, dots, and numbers. For example, the following file names are all acceptable: - train split: `train.csv`, `my_train_file.csv`, `train1.csv` - validation split: `validation.csv`, `my_validation_file.csv`, `validation1.csv` - test split: `test.csv`, `my_test_file.csv`, `test1.csv` Here is an example where all the files are placed into a directory named `data`: ``` my_dataset_repository/ ├── README.md └── data/ ├── train.csv ├── test.csv └── validation.csv ``` ### Custom filename split If your dataset splits have custom names that aren't `train`, `test`, or `validation`, then you can name your data files like `data/<split_name>-xxxxx-of-xxxxx.csv`. Here is an example with three splits, `train`, `test`, and `random`: ``` my_dataset_repository/ ├── README.md └── data/ ├── train-00000-of-00003.csv ├── train-00001-of-00003.csv ├── train-00002-of-00003.csv ├── test-00000-of-00001.csv ├── random-00000-of-00003.csv ├── random-00001-of-00003.csv └── random-00002-of-00003.csv ``` ### Single split When 🤗 Datasets can't find any of the above patterns, then it'll treat all the files as a single train split. If your dataset splits aren't loading as expected, it may be due to an incorrect pattern. ### Split name keywords There are several ways to name splits. Validation splits are sometimes called "dev", and test splits may be referred to as "eval". These other split names are also supported, and the following keywords are equivalent: - train, training - validation, valid, val, dev - test, testing, eval, evaluation The structure below is a valid repository: ``` my_dataset_repository/ ├── README.md └── data/ ├── training.csv ├── eval.csv └── valid.csv ``` ### Multiple files per split If one of your splits comprises several files, 🤗 Datasets can still infer whether it is the train, validation, and test split from the file name. For example, if your train and test splits span several files: ``` my_dataset_repository/ ├── README.md ├── train_0.csv ├── train_1.csv ├── train_2.csv ├── train_3.csv ├── test_0.csv └── test_1.csv ``` Make sure all the files of your `train` set have *train* in their names (same for test and validation). Even if you add a prefix or suffix to `train` in the file name (like `my_train_file_00001.csv` for example), 🤗 Datasets can still infer the appropriate split. For convenience, you can also place your data files into different directories. 
In this case, the split name is inferred from the directory name. ``` my_dataset_repository/ ├── README.md └── data/ ├── train/ │ ├── shard_0.csv │ ├── shard_1.csv │ ├── shard_2.csv │ └── shard_3.csv └── test/ ├── shard_0.csv └── shard_1.csv ``` For more flexibility over how to load and generate a dataset, you can also write a [dataset loading script](./dataset_script).
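However you structure the repository, no custom loading code is needed once it matches one of the patterns above. A minimal sketch, assuming a Hub repository named `username/my_dataset_repository` laid out as in the examples (the repository id is a placeholder):

```python
from datasets import load_dataset

# Load every split the repository defines (train/test/validation where present)
dataset = load_dataset("username/my_dataset_repository")
print(dataset)  # DatasetDict with one entry per detected split

# Load a single split, or a specific configuration declared in the YAML `configs` field
train_ds = load_dataset("username/my_dataset_repository", split="train")
main_data = load_dataset("username/my_dataset_repository", "main_data", split="train")
```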
huggingface/datasets/blob/main/docs/source/repository_structure.mdx
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Time Series Utilities This page lists all the utility functions and classes that can be used for time series-based models. Most of these are only useful if you are studying the code of the time series models or wish to add to the collection of distributional output classes. ## Distributional Output [[autodoc]] time_series_utils.NormalOutput [[autodoc]] time_series_utils.StudentTOutput [[autodoc]] time_series_utils.NegativeBinomialOutput
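To build intuition for what these distributional output classes provide, here is a simplified, stand-alone sketch in plain PyTorch (not the `transformers` implementation): a Student-T output head projects model features to the distribution's parameters and returns a `torch.distributions` object that can be sampled or scored with a negative log-likelihood loss.

```python
import torch
import torch.nn as nn
from torch.distributions import StudentT

class ToyStudentTOutput(nn.Module):
    """Toy stand-in for a distributional output head: features -> (df, loc, scale)."""

    def __init__(self, in_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, 3)  # one projection per distribution parameter

    def forward(self, features: torch.Tensor) -> StudentT:
        df, loc, scale = self.proj(features).chunk(3, dim=-1)
        # Enforce positivity constraints on df and scale before building the distribution
        return StudentT(2.0 + nn.functional.softplus(df), loc, nn.functional.softplus(scale) + 1e-6)

head = ToyStudentTOutput(in_features=32)
distr = head(torch.randn(4, 32))           # a batch of 4 predictive distributions
nll = -distr.log_prob(torch.randn(4, 1))   # negative log-likelihood training signal
```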
huggingface/transformers/blob/main/docs/source/en/internal/time_series_utils.md
!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # XTREME-S benchmark examples *Maintainers: [Anton Lozhkov](https://github.com/anton-l) and [Patrick von Platen](https://github.com/patrickvonplaten)* The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers XX typologically diverse languages and seven downstream tasks grouped in four families: speech recognition, translation, classification and retrieval. XTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S (shown here with their ISO 3166-1 codes): afr, amh, ara, asm, ast, azj, bel, ben, bos, cat, ceb, ces, cmn, cym, dan, deu, ell, eng, spa, est, fas, ful, fin, tgl, fra, gle, glg, guj, hau, heb, hin, hrv, hun, hye, ind, ibo, isl, ita, jpn, jav, kat, kam, kea, kaz, khm, kan, kor, ckb, kir, ltz, lug, lin, lao, lit, luo, lav, mri, mkd, mal, mon, mar, msa, mlt, mya, nob, npi, nld, nso, nya, oci, orm, ory, pan, pol, pus, por, ron, rus, bul, snd, slk, slv, sna, som, srp, swe, swh, tam, tel, tgk, tha, tur, ukr, umb, urd, uzb, vie, wol, xho, yor, yue and zul. Paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752) Dataset: [https://huggingface.co/datasets/google/xtreme_s](https://huggingface.co/datasets/google/xtreme_s) ## Fine-tuning for the XTREME-S tasks Based on the [`run_xtreme_s.py`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/xtreme-s/run_xtreme_s.py) script. This script can fine-tune any of the pretrained speech models on the [hub](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition) on the [XTREME-S dataset](https://huggingface.co/datasets/google/xtreme_s) tasks. XTREME-S is made up of 7 different tasks. Here is how to run the script on each of them: ```bash export TASK_NAME=mls.all python run_xtreme_s.py \ --model_name_or_path="facebook/wav2vec2-xls-r-300m" \ --task="${TASK_NAME}" \ --output_dir="xtreme_s_xlsr_${TASK_NAME}" \ --num_train_epochs=100 \ --per_device_train_batch_size=32 \ --learning_rate="3e-4" \ --target_column_name="transcription" \ --save_steps=500 \ --eval_steps=500 \ --gradient_checkpointing \ --fp16 \ --group_by_length \ --do_train \ --do_eval \ --do_predict \ --push_to_hub ``` where `TASK_NAME` can be one of: `mls, voxpopuli, covost2, fleurs-asr, fleurs-lang_id, minds14`. We get the following results on the test set of the benchmark's datasets. 
The corresponding training commands for each dataset are given in the sections below: | Task | Dataset | Result | Fine-tuned model & logs | Training time | GPUs | |-----------------------|-----------|-----------------------|--------------------------------------------------------------------|---------------|--------| | Speech Recognition | MLS | 30.33 WER | [here](https://huggingface.co/anton-l/xtreme_s_xlsr_300m_mls/) | 18:47:25 | 8xV100 | | Speech Recognition | VoxPopuli | - | - | - | - | | Speech Recognition | FLEURS | - | - | - | - | | Speech Translation | CoVoST-2 | - | - | - | - | | Speech Classification | Minds-14 | 90.15 F1 / 90.33 Acc. | [here](https://huggingface.co/anton-l/xtreme_s_xlsr_300m_minds14/) | 2:54:21 | 2xA100 | | Speech Classification | FLEURS | - | - | - | - | | Speech Retrieval | FLEURS | - | - | - | - | ### Speech Recognition with MLS The following command shows how to fine-tune the [XLS-R](https://huggingface.co/docs/transformers/main/model_doc/xls_r) model on [XTREME-S MLS](https://huggingface.co/datasets/google/xtreme_s#multilingual-librispeech-mls) using 8 GPUs in half-precision. ```bash python -m torch.distributed.launch \ --nproc_per_node=8 \ run_xtreme_s.py \ --task="mls" \ --language="all" \ --model_name_or_path="facebook/wav2vec2-xls-r-300m" \ --output_dir="xtreme_s_xlsr_300m_mls" \ --overwrite_output_dir \ --num_train_epochs=100 \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=1 \ --gradient_accumulation_steps=2 \ --learning_rate="3e-4" \ --warmup_steps=3000 \ --evaluation_strategy="steps" \ --max_duration_in_seconds=20 \ --save_steps=500 \ --eval_steps=500 \ --logging_steps=1 \ --layerdrop=0.0 \ --mask_time_prob=0.3 \ --mask_time_length=10 \ --mask_feature_prob=0.1 \ --mask_feature_length=64 \ --freeze_feature_encoder \ --gradient_checkpointing \ --fp16 \ --group_by_length \ --do_train \ --do_eval \ --do_predict \ --metric_for_best_model="wer" \ --greater_is_better=False \ --load_best_model_at_end \ --push_to_hub ``` On 8 V100 GPUs, this script should run in ~19 hours and yield a cross-entropy loss of **0.6215** and word error rate of **30.33** ### Speech Classification with Minds-14 The following command shows how to fine-tune the [XLS-R](https://huggingface.co/docs/transformers/main/model_doc/xls_r) model on [XTREME-S MLS](https://huggingface.co/datasets/google/xtreme_s#intent-classification---minds-14) using 2 GPUs in half-precision. ```bash python -m torch.distributed.launch \ --nproc_per_node=2 \ run_xtreme_s.py \ --task="minds14" \ --language="all" \ --model_name_or_path="facebook/wav2vec2-xls-r-300m" \ --output_dir="xtreme_s_xlsr_300m_minds14" \ --overwrite_output_dir \ --num_train_epochs=50 \ --per_device_train_batch_size=32 \ --per_device_eval_batch_size=8 \ --gradient_accumulation_steps=1 \ --learning_rate="3e-4" \ --warmup_steps=1500 \ --evaluation_strategy="steps" \ --max_duration_in_seconds=30 \ --save_steps=200 \ --eval_steps=200 \ --logging_steps=1 \ --layerdrop=0.0 \ --mask_time_prob=0.3 \ --mask_time_length=10 \ --mask_feature_prob=0.1 \ --mask_feature_length=64 \ --freeze_feature_encoder \ --gradient_checkpointing \ --fp16 \ --group_by_length \ --do_train \ --do_eval \ --do_predict \ --metric_for_best_model="f1" \ --greater_is_better=True \ --load_best_model_at_end \ --push_to_hub ``` On 2 A100 GPUs, this script should run in ~5 hours and yield a cross-entropy loss of **0.4119** and F1 score of **90.15**
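Once one of these fine-tuned checkpoints is on the Hub, it can be tried out with the standard `transformers` pipeline. A hedged sketch using the Minds-14 classifier linked in the results table above (assuming the checkpoint bundles its feature extractor and that `audio.wav` is a local 16 kHz mono recording):

```python
from transformers import pipeline

# Intent classification with the fine-tuned XLS-R Minds-14 checkpoint from the table above
classifier = pipeline("audio-classification", model="anton-l/xtreme_s_xlsr_300m_minds14")
print(classifier("audio.wav", top_k=3))  # top-3 intent labels with scores
```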
huggingface/transformers/blob/main/examples/research_projects/xtreme-s/README.md
Gradio Demo: progress ``` !pip install -q gradio tqdm datasets ``` ``` import gradio as gr import random import time import tqdm from datasets import load_dataset import shutil from uuid import uuid4 with gr.Blocks() as demo: with gr.Row(): text = gr.Textbox() textb = gr.Textbox() with gr.Row(): load_set_btn = gr.Button("Load Set") load_nested_set_btn = gr.Button("Load Nested Set") load_random_btn = gr.Button("Load Random") clean_imgs_btn = gr.Button("Clean Images") wait_btn = gr.Button("Wait") do_all_btn = gr.Button("Do All") track_tqdm_btn = gr.Button("Bind TQDM") bind_internal_tqdm_btn = gr.Button("Bind Internal TQDM") text2 = gr.Textbox() # track list def load_set(text, text2, progress=gr.Progress()): imgs = [None] * 24 for img in progress.tqdm(imgs, desc="Loading from list"): time.sleep(0.1) return "done" load_set_btn.click(load_set, [text, textb], text2) # track nested list def load_nested_set(text, text2, progress=gr.Progress()): imgs = [[None] * 8] * 3 for img_set in progress.tqdm(imgs, desc="Nested list"): time.sleep(2) for img in progress.tqdm(img_set, desc="inner list"): time.sleep(0.1) return "done" load_nested_set_btn.click(load_nested_set, [text, textb], text2) # track iterable of unknown length def load_random(data, progress=gr.Progress()): def yielder(): for i in range(0, random.randint(15, 20)): time.sleep(0.1) yield None for img in progress.tqdm(yielder()): pass return "done" load_random_btn.click(load_random, {text, textb}, text2) # manual progress def clean_imgs(text, progress=gr.Progress()): progress(0.2, desc="Collecting Images") time.sleep(1) progress(0.5, desc="Cleaning Images") time.sleep(1.5) progress(0.8, desc="Sending Images") time.sleep(1.5) return "done" clean_imgs_btn.click(clean_imgs, text, text2) # no progress def wait(text): time.sleep(4) return "done" wait_btn.click(wait, text, text2) # multiple progressions def do_all(data, progress=gr.Progress()): load_set(data[text], data[textb], progress) load_random(data, progress) clean_imgs(data[text], progress) progress(None) wait(text) return "done" do_all_btn.click(do_all, {text, textb}, text2) def track_tqdm(data, progress=gr.Progress(track_tqdm=True)): for i in tqdm.tqdm(range(5), desc="outer"): for j in tqdm.tqdm(range(4), desc="inner"): time.sleep(1) return "done" track_tqdm_btn.click(track_tqdm, {text, textb}, text2) def bind_internal_tqdm(data, progress=gr.Progress(track_tqdm=True)): outdir = "__tmp/" + str(uuid4()) load_dataset("beans", split="train", cache_dir=outdir) shutil.rmtree(outdir) return "done" bind_internal_tqdm_btn.click(bind_internal_tqdm, {text, textb}, text2) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/progress/run.ipynb
``python import argparse import os import torch from torch.optim import AdamW from torch.utils.data import DataLoader import peft import evaluate from datasets import load_dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed from tqdm import tqdm ``` ```python batch_size = 8 model_name_or_path = "roberta-large" task = "mrpc" peft_type = peft.PeftType.IA3 device = "cuda" num_epochs = 12 ``` ```python # peft_config = LoraConfig(task_type="SEQ_CLS", inference_mode=False, r=8, lora_alpha=16, lora_dropout=0.1) peft_config = peft.IA3Config(task_type="SEQ_CLS", inference_mode=False) lr = 1e-3 ``` ```python if any(k in model_name_or_path for k in ("gpt", "opt", "bloom")): padding_side = "left" else: padding_side = "right" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side=padding_side) if getattr(tokenizer, "pad_token_id") is None: tokenizer.pad_token_id = tokenizer.eos_token_id datasets = load_dataset("glue", task) metric = evaluate.load("glue", task) def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): return tokenizer.pad(examples, padding="longest", return_tensors="pt") # Instantiate dataloaders. train_dataloader = DataLoader(tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=batch_size ) test_dataloader = DataLoader(tokenized_datasets["test"], shuffle=False, collate_fn=collate_fn, batch_size=batch_size) ``` ```python model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path, return_dict=True) model = peft.get_peft_model(model, peft_config) model.print_trainable_parameters() model ``` ```python optimizer = AdamW(params=model.parameters(), lr=lr) # Instantiate scheduler lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0.06 * (len(train_dataloader) * num_epochs), num_training_steps=(len(train_dataloader) * num_epochs), ) ``` ```python model.to(device) for epoch in range(num_epochs): model.train() for step, batch in enumerate(tqdm(train_dataloader)): batch.to(device) outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() for step, batch in enumerate(tqdm(eval_dataloader)): batch.to(device) with torch.no_grad(): outputs = model(**batch) predictions = outputs.logits.argmax(dim=-1) predictions, references = predictions, batch["labels"] metric.add_batch( predictions=predictions, references=references, ) eval_metric = metric.compute() print(f"epoch {epoch}:", eval_metric) ``` ## Share adapters on the 🤗 Hub ```python model.push_to_hub("SumanthRH/roberta-large-peft-ia3", use_auth_token=True) ``` ## Load adapters from the Hub You can also directly load adapters from the Hub using the commands below: ```python import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer 
peft_model_id = "SumanthRH/roberta-large-peft-ia3" config = PeftConfig.from_pretrained(peft_model_id) inference_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path) tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the Lora model inference_model = PeftModel.from_pretrained(inference_model, peft_model_id) inference_model.to(device) inference_model.eval() for step, batch in enumerate(tqdm(eval_dataloader)): batch.to(device) with torch.no_grad(): outputs = inference_model(**batch) predictions = outputs.logits.argmax(dim=-1) predictions, references = predictions, batch["labels"] metric.add_batch( predictions=predictions, references=references, ) eval_metric = metric.compute() print(eval_metric) ```
huggingface/peft/blob/main/examples/sequence_classification/IA3.ipynb
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. [`TextualInversionLoaderMixin`] provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. <Tip> To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/loading_adapters#textual-inversion) loading guide. </Tip> ## TextualInversionLoaderMixin [[autodoc]] loaders.textual_inversion.TextualInversionLoaderMixin
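In practice, loading an embedding is a single call on any pipeline that inherits the mixin. A short sketch, assuming a CUDA device and using the `sd-concepts-library/cat-toy` concept (whose activation token is `<cat-toy>`) purely as an illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Provided by TextualInversionLoaderMixin: registers the learned embedding and its token
pipeline.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipeline("A <cat-toy> sitting on a park bench").images[0]
image.save("cat-toy.png")
```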
huggingface/diffusers/blob/main/docs/source/en/api/loaders/textual_inversion.md
Malware Scanning We run every file of your repositories through a [malware scanner](https://www.clamav.net/). Scanning is triggered at each commit or when you visit a repository page. Here is an [example view](https://huggingface.co/mcpotato/42-eicar-street/tree/main) of an infected file: <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/eicar-hub-file-view.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/eicar-hub-file-view-dark.png"/> </div> <Tip> If your file shows neither an ok nor an infected badge, it could mean that it is currently being scanned, is waiting to be scanned, or that there was an error during the scan. Scanning can take up to a few minutes. </Tip> If at least one file has been scanned as unsafe, a message will warn users: <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/eicar-hub-model.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/eicar-hub-model-dark.png"/> </div> <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/eicar-hub-tree-view.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/eicar-hub-tree-view-dark.png"/> </div> <Tip> If you are the repository owner, we advise you to remove the suspicious file; the repository will then appear as safe again. </Tip>
huggingface/hub-docs/blob/main/docs/hub/security-malware.md
-- title: "🐶Safetensors audited as really safe and becoming the default" thumbnail: /blog/assets/142_safetensors_official/thumbnail.png authors: - user: Narsil - user: stellaathena guest: true --- # Audit shows that safetensors is safe and ready to become the default [Hugging Face](https://huggingface.co/), in close collaboration with [EleutherAI](https://www.eleuther.ai/) and [Stability AI](https://stability.ai/), has ordered an external security audit of the `safetensors` library, the results of which allow all three organizations to move toward making the library the default format for saved models. The full results of the security audit, performed by [Trail of Bits](https://www.trailofbits.com/), can be found here: [Report](https://huggingface.co/datasets/safetensors/trail_of_bits_audit_repot/resolve/main/SOW-TrailofBits-EleutherAI_HuggingFace-v1.2.pdf). The following blog post explains the origins of the library, why these audit results are important, and the next steps. ## What is safetensors? 🐶[Safetensors](https://github.com/huggingface/safetensors) is a library for saving and loading tensors in the most common frameworks (including PyTorch, TensorFlow, JAX, PaddlePaddle, and NumPy). For a more concrete explanation, we'll use PyTorch. ```python import torch from safetensors.torch import load_file, save_file weights = {"embeddings": torch.zeros((10, 100))} save_file(weights, "model.safetensors") weights2 = load_file("model.safetensors") ``` It also has a number of [cool features](https://github.com/huggingface/safetensors#yet-another-format-) compared to other formats, most notably that loading files is _safe_, as we'll see later. When you're using `transformers`, if `safetensors` is installed, then those files will already be used preferentially in order to prevent issues, which means that ``` pip install safetensors ``` is likely to be the only thing needed to run `safetensors` files safely. Going forward and thanks to the validation of the library, `safetensors` will now be installed in `transformers` by default. The next step is saving models in `safetensors` by default. We are thrilled to see that the `safetensors` library is already seeing use in the ML ecosystem, including: - [Civitai](https://civitai.com/) - [Stable Diffusion Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - [dfdx](https://github.com/coreylowman/dfdx) - [LLaMA.cpp](https://github.com/ggerganov/llama.cpp/blob/e6a46b0ed1884c77267dc70693183e3b7164e0e0/convert.py#L537) ## Why create something new? The creation of this library was driven by the fact that PyTorch uses `pickle` under the hood, which is inherently unsafe. (Sources: [1](https://huggingface.co/docs/hub/security-pickle), [2, video](https://www.youtube.com/watch?v=2ethDz9KnLk), [3](https://github.com/pytorch/pytorch/issues/52596)) With pickle, it is possible to write a malicious file posing as a model that gives full control of a user's computer to an attacker without the user's knowledge, allowing the attacker to steal all their bitcoins 😓. While this vulnerability in pickle is widely known in the computer security world (and is acknowledged in the PyTorch [docs](https://pytorch.org/docs/stable/generated/torch.load.html)), it’s not common knowledge in the broader ML community. Since the Hugging Face Hub is a platform where anyone can upload and share models, it is important to make efforts to prevent users from getting infected by malware. 
We are also taking steps to make sure the existing PyTorch files are not malicious, but the best we can do is flag suspicious-looking files. Of course, there are other file formats out there, but none seemed to meet the full set of [ideal requirements](https://github.com/huggingface/safetensors#yet-another-format-) our team identified. In addition to being safe, `safetensors` allows lazy loading and generally faster loads (around 100x faster on CPU). Lazy loading means loading only part of a tensor in an efficient manner. This particular feature enables arbitrary sharding with efficient inference libraries, such as [text-generation-inference](https://github.com/huggingface/text-generation-inference), to load LLMs (such as LLaMA, StarCoder, etc.) on various types of hardware with maximum efficiency. Because it loads so fast and is framework agnostic, we can even use the format to load models from the same file in PyTorch or TensorFlow. ## The security audit Since `safetensors` main asset is providing safety guarantees, we wanted to make sure it actually delivered. That's why Hugging Face, EleutherAI, and Stability AI teamed up to get an external security audit to confirm it. Important findings: - No critical security flaw leading to arbitrary code execution was found. - Some imprecisions in the spec format were detected and fixed. - Some missing validation allowed [polyglot files](https://en.wikipedia.org/wiki/Polyglot_(computing)), which was fixed. - Lots of improvements to the test suite were proposed and implemented. In the name of openness and transparency, all companies agreed to make the report fully public. [Full report](https://huggingface.co/datasets/safetensors/trail_of_bits_audit_repot/resolve/main/SOW-TrailofBits-EleutherAI_HuggingFace-v1.2.pdf) One import thing to note is that the library is written in Rust. This adds an extra layer of [security](https://doc.rust-lang.org/rustc/exploit-mitigations.html) coming directly from the language itself. While it is impossible to prove the absence of flaws, this is a major step in giving reassurance that `safetensors` is indeed safe to use. ## Going forward For Hugging Face, EleutherAI, and Stability AI, the master plan is to shift to using this format by default. EleutherAI has added support for evaluating models stored as `safetensors` in their LM Evaluation Harness and is working on supporting the format in their GPT-NeoX distributed training library. Within the `transformers` library we are doing the following: - Create `safetensors`. - Verify it works and can deliver on all promises (lazy load for LLMs, single file for all frameworks, faster loads). - Verify it's safe. (This is today's announcement.) - Make `safetensors` a core dependency. (This is already done or soon to come.) - Make `safetensors` the default saving format. This will happen in a few months when we have enough feedback to make sure it will cause as little disruption as possible and enough users already have the library to be able to load new models even on relatively old `transformers` versions. As for `safetensors` itself, we're looking into adding more advanced features for LLM training, which has its own set of issues with current formats. Finally, we plan to release a `1.0` in the near future, with the large user base of `transformers` providing the final testing step. The format and the lib have had very few modifications since their inception, which is a good sign of stability. We're glad we can bring ML one step closer to being safe and efficient for all!
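As a practical footnote to the lazy-loading point above, that behavior is exposed through `safe_open`, which reads only the small header up front and then just the bytes you ask for. A small sketch reusing the `model.safetensors` file from the earlier example:

```python
from safetensors import safe_open

# Open the file without materializing every tensor in memory
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    print(f.keys())                                  # tensor names, read from the header only
    embeddings = f.get_tensor("embeddings")          # load one tensor on demand
    first_rows = f.get_slice("embeddings")[:2, :]    # or only a slice of it
```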
huggingface/blog/blob/main/safetensors-security-audit.md
-- title: "Federated Learning using Hugging Face and Flower" thumbnail: /blog/assets/fl-with-flower/thumbnail.png authors: - user: charlesbvll guest: true --- # Federated Learning using Hugging Face and Flower <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/fl-with-flower.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> This tutorial will show how to leverage Hugging Face to federate the training of language models over multiple clients using [Flower](https://flower.dev/). More specifically, we will fine-tune a pre-trained Transformer model (distilBERT) for sequence classification over a dataset of IMDB ratings. The end goal is to detect if a movie rating is positive or negative. A notebook is also available [here](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/fl-with-flower.ipynb) but instead of running on multiple separate clients it utilizes the simulation functionality of Flower (using `flwr['simulation']`) in order to emulate a federated setting inside Google Colab (this also means that instead of calling `start_server` we will call `start_simulation`, and that a few other modifications are needed). ## Dependencies To follow along this tutorial you will need to install the following packages: `datasets`, `evaluate`, `flwr`, `torch`, and `transformers`. This can be done using `pip`: ```sh pip install datasets evaluate flwr torch transformers ``` ## Standard Hugging Face workflow ### Handling the data To fetch the IMDB dataset, we will use Hugging Face's `datasets` library. We then need to tokenize the data and create `PyTorch` dataloaders, this is all done in the `load_data` function: ```python import random import torch from datasets import load_dataset from torch.utils.data import DataLoader from transformers import AutoTokenizer, DataCollatorWithPadding DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") CHECKPOINT = "distilbert-base-uncased" def load_data(): """Load IMDB data (training and eval)""" raw_datasets = load_dataset("imdb") raw_datasets = raw_datasets.shuffle(seed=42) # remove unnecessary data split del raw_datasets["unsupervised"] tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT) def tokenize_function(examples): return tokenizer(examples["text"], truncation=True) # We will take a small sample in order to reduce the compute time, this is optional train_population = random.sample(range(len(raw_datasets["train"])), 100) test_population = random.sample(range(len(raw_datasets["test"])), 100) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets["train"] = tokenized_datasets["train"].select(train_population) tokenized_datasets["test"] = tokenized_datasets["test"].select(test_population) tokenized_datasets = tokenized_datasets.remove_columns("text") tokenized_datasets = tokenized_datasets.rename_column("label", "labels") data_collator = DataCollatorWithPadding(tokenizer=tokenizer) trainloader = DataLoader( tokenized_datasets["train"], shuffle=True, batch_size=32, collate_fn=data_collator, ) testloader = DataLoader( tokenized_datasets["test"], batch_size=32, collate_fn=data_collator ) return trainloader, testloader trainloader, testloader = load_data() ``` ### Training and testing the model Once we have a way of creating our trainloader and testloader, we can take care of the training and testing. 
This is very similar to any `PyTorch` training or testing loop: ```python from evaluate import load as load_metric from transformers import AdamW def train(net, trainloader, epochs): optimizer = AdamW(net.parameters(), lr=5e-5) net.train() for _ in range(epochs): for batch in trainloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} outputs = net(**batch) loss = outputs.loss loss.backward() optimizer.step() optimizer.zero_grad() def test(net, testloader): metric = load_metric("accuracy") loss = 0 net.eval() for batch in testloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} with torch.no_grad(): outputs = net(**batch) logits = outputs.logits loss += outputs.loss.item() predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) loss /= len(testloader.dataset) accuracy = metric.compute()["accuracy"] return loss, accuracy ``` ### Creating the model itself To create the model itself, we will just load the pre-trained distillBERT model using Hugging Face’s `AutoModelForSequenceClassification` : ```python from transformers import AutoModelForSequenceClassification net = AutoModelForSequenceClassification.from_pretrained( CHECKPOINT, num_labels=2 ).to(DEVICE) ``` ## Federating the example The idea behind Federated Learning is to train a model between multiple clients and a server without having to share any data. This is done by letting each client train the model locally on its data and send its parameters back to the server, which then aggregates all the clients’ parameters together using a predefined strategy. This process is made very simple by using the [Flower](https://github.com/adap/flower) framework. If you want a more complete overview, be sure to check out this guide: [What is Federated Learning?](https://flower.dev/docs/tutorial/Flower-0-What-is-FL.html) ### Creating the IMDBClient To federate our example to multiple clients, we first need to write our Flower client class (inheriting from `flwr.client.NumPyClient`). This is very easy, as our model is a standard `PyTorch` model: ```python from collections import OrderedDict import flwr as fl class IMDBClient(fl.client.NumPyClient): def get_parameters(self, config): return [val.cpu().numpy() for _, val in net.state_dict().items()] def set_parameters(self, parameters): params_dict = zip(net.state_dict().keys(), parameters) state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict}) net.load_state_dict(state_dict, strict=True) def fit(self, parameters, config): self.set_parameters(parameters) print("Training Started...") train(net, trainloader, epochs=1) print("Training Finished.") return self.get_parameters(config={}), len(trainloader), {} def evaluate(self, parameters, config): self.set_parameters(parameters) loss, accuracy = test(net, testloader) return float(loss), len(testloader), {"accuracy": float(accuracy)} ``` The `get_parameters` function lets the server get the client's parameters. Inversely, the `set_parameters` function allows the server to send its parameters to the client. Finally, the `fit` function trains the model locally for the client, and the `evaluate` function tests the model locally and returns the relevant metrics. We can now start client instances using: ```python fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=IMDBClient()) ``` ### Starting the server Now that we have a way to instantiate clients, we need to create our server in order to aggregate the results. 
Using Flower, this can be done very easily by first choosing a strategy (here, we are using `FedAvg`, which will define the global weights as the average of all the clients' weights at each round) and then using the `flwr.server.start_server` function: ```python def weighted_average(metrics): accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics] examples = [num_examples for num_examples, _ in metrics] return {"accuracy": sum(accuracies) / sum(examples)} # Define strategy strategy = fl.server.strategy.FedAvg( fraction_fit=1.0, fraction_evaluate=1.0, evaluate_metrics_aggregation_fn=weighted_average, ) # Start server fl.server.start_server( server_address="0.0.0.0:8080", config=fl.server.ServerConfig(num_rounds=3), strategy=strategy, ) ``` The `weighted_average` function provides a way to aggregate the metrics distributed amongst the clients (here it lets us display a weighted average accuracy for every round; since `IMDBClient.evaluate` only returns `accuracy` in its metrics dictionary, that is the only metric we aggregate). ## Putting everything together If you want to see everything put together, check out the code example we wrote for the Flower repo: [https://github.com/adap/flower/tree/main/examples/quickstart_huggingface](https://github.com/adap/flower/tree/main/examples/quickstart_huggingface). Of course, this is a very basic example, and a lot can be added or modified; it was just to showcase how simply we could federate a Hugging Face workflow using Flower. Note that in this example we used `PyTorch`, but we could very well have used `TensorFlow`.
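For reference, the Colab notebook mentioned at the start of this post swaps `start_server`/`start_numpy_client` for Flower's simulation mode. A rough sketch of that entrypoint, assuming `flwr['simulation']` is installed (exact argument names can vary between Flower versions, and a real setup would give each simulated client its own data partition):

```python
def client_fn(cid: str):
    # Each simulated client would normally load its own partition of the data here
    return IMDBClient()

fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=2,
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=strategy,
)
```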
huggingface/blog/blob/main/fl-with-flower.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BertJapanese ## Overview The BERT models trained on Japanese text. There are models with two different tokenization methods: - Tokenize with MeCab and WordPiece. This requires some extra dependencies, [fugashi](https://github.com/polm/fugashi) which is a wrapper around [MeCab](https://taku910.github.io/mecab/). - Tokenize into characters. To use *MecabTokenizer*, you should `pip install transformers["ja"]` (or `pip install -e .["ja"]` if you install from source) to install dependencies. See [details on cl-tohoku repository](https://github.com/cl-tohoku/bert-japanese). Example of using a model with MeCab and WordPiece tokenization: ```python >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese") >>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese") >>> ## Input Japanese Text >>> line = "吾輩は猫である。" >>> inputs = tokenizer(line, return_tensors="pt") >>> print(tokenizer.decode(inputs["input_ids"][0])) [CLS] 吾輩 は 猫 で ある 。 [SEP] >>> outputs = bertjapanese(**inputs) ``` Example of using a model with Character tokenization: ```python >>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char") >>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char") >>> ## Input Japanese Text >>> line = "吾輩は猫である。" >>> inputs = tokenizer(line, return_tensors="pt") >>> print(tokenizer.decode(inputs["input_ids"][0])) [CLS] 吾 輩 は 猫 で あ る 。 [SEP] >>> outputs = bertjapanese(**inputs) ``` This model was contributed by [cl-tohoku](https://huggingface.co/cl-tohoku). <Tip> This implementation is the same as BERT, except for tokenization method. Refer to [BERT documentation](bert) for API reference information. </Tip> ## BertJapaneseTokenizer [[autodoc]] BertJapaneseTokenizer
huggingface/transformers/blob/main/docs/source/en/model_doc/bert-japanese.md
Tokenizers, check![[tokenizers-check]] <CourseFloatingBanner chapter={6} classNames="absolute z-10 right-0 top-0" /> Great job finishing this chapter! After this deep dive into tokenizers, you should: - Be able to train a new tokenizer using an old one as a template - Understand how to use offsets to map tokens' positions to their original span of text - Know the differences between BPE, WordPiece, and Unigram - Be able to mix and match the blocks provided by the 🤗 Tokenizers library to build your own tokenizer - Be able to use that tokenizer inside the 🤗 Transformers library
huggingface/course/blob/main/chapters/en/chapter6/9.mdx
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # LiLT ## Overview The LiLT model was proposed in [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. LiLT allows to combine any pre-trained RoBERTa text encoder with a lightweight Layout Transformer, to enable [LayoutLM](layoutlm)-like document understanding for many languages. The abstract from the paper is the following: *Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/lilt_architecture.jpg" alt="drawing" width="600"/> <small> LiLT architecture. Taken from the <a href="https://arxiv.org/abs/2202.13669">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/jpwang/lilt). ## Usage tips - To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the [hub](https://huggingface.co/models?search=roberta), refer to [this guide](https://github.com/jpWang/LiLT#or-generate-your-own-checkpoint-optional). The script will result in `config.json` and `pytorch_model.bin` files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account): ``` from transformers import LiltModel model = LiltModel.from_pretrained("path_to_your_files") model.push_to_hub("name_of_repo_on_the_hub") ``` - When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer. 
- As [lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) uses the same vocabulary as [LayoutLMv3](layoutlmv3), one can use [`LayoutLMv3TokenizerFast`] to prepare data for the model. The same is true for [lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-infoxlm-base): one can use [`LayoutXLMTokenizerFast`] for that model. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LiLT. - Demo notebooks for LiLT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT). **Documentation resources** - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## LiltConfig [[autodoc]] LiltConfig ## LiltModel [[autodoc]] LiltModel - forward ## LiltForSequenceClassification [[autodoc]] LiltForSequenceClassification - forward ## LiltForTokenClassification [[autodoc]] LiltForTokenClassification - forward ## LiltForQuestionAnswering [[autodoc]] LiltForQuestionAnswering - forward
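Tying the data-preparation tip above to the classes listed here, a hedged sketch of running token classification with `lilt-roberta-en-base` (the words and 0-1000 normalized bounding boxes are invented, `num_labels=2` is arbitrary, and loading `LayoutLMv3TokenizerFast` directly from the LiLT checkpoint is an assumption based on it sharing the RoBERTa vocabulary):

```python
import torch
from transformers import LayoutLMv3TokenizerFast, LiltForTokenClassification

tokenizer = LayoutLMv3TokenizerFast.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = LiltForTokenClassification.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base", num_labels=2)

words = ["Invoice", "Total", "42.00"]                                   # example OCR words
boxes = [[50, 60, 150, 80], [50, 100, 120, 120], [130, 100, 200, 120]]  # 0-1000 normalized boxes

encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits  # one prediction per token
```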
huggingface/transformers/blob/main/docs/source/en/model_doc/lilt.md
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Fine-tune a pretrained model [[open-in-colab]] There are significant benefits to using a pretrained model. It reduces computation costs, your carbon footprint, and allows you to use state-of-the-art models without having to train one from scratch. 🤗 Transformers provides access to thousands of pretrained models for a wide range of tasks. When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice: * Fine-tune a pretrained model with 🤗 Transformers [`Trainer`]. * Fine-tune a pretrained model in TensorFlow with Keras. * Fine-tune a pretrained model in native PyTorch. <a id='data-processing'></a> ## Prepare a dataset <Youtube id="_BZearw7f0w"/> Before you can fine-tune a pretrained model, download a dataset and prepare it for training. The previous tutorial showed you how to process data for training, and now you get an opportunity to put those skills to the test! Begin by loading the [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full) dataset: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset["train"][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'} ``` As you now know, you need a tokenizer to process the text and include a padding and truncation strategy to handle any variable sequence lengths. 
To process your dataset in one step, use 🤗 Datasets [`map`](https://huggingface.co/docs/datasets/process#map) method to apply a preprocessing function over the entire dataset: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> def tokenize_function(examples): ... return tokenizer(examples["text"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) ``` If you like, you can create a smaller subset of the full dataset to fine-tune on to reduce the time it takes: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` <a id='trainer'></a> ## Train At this point, you should follow the section corresponding to the framework you want to use. You can use the links in the right sidebar to jump to the one you want - and if you want to hide all of the content for a given framework, just use the button at the top-right of that framework's block! <frameworkcontent> <pt> <Youtube id="nvBXf7s7vTI"/> ## Train with PyTorch Trainer 🤗 Transformers provides a [`Trainer`] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] API supports a wide range of training options and features such as logging, gradient accumulation, and mixed precision. Start by loading your model and specify the number of expected labels. From the Yelp Review [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields), you know there are five labels: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` <Tip> You will see a warning about some of the pretrained weights not being used and some weights being randomly initialized. Don't worry, this is completely normal! The pretrained head of the BERT model is discarded, and replaced with a randomly initialized classification head. You will fine-tune this new model head on your sequence classification task, transferring the knowledge of the pretrained model to it. </Tip> ### Training hyperparameters Next, create a [`TrainingArguments`] class which contains all the hyperparameters you can tune as well as flags for activating different training options. For this tutorial you can start with the default training [hyperparameters](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments), but feel free to experiment with these to find your optimal settings. Specify where to save the checkpoints from your training: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir="test_trainer") ``` ### Evaluate [`Trainer`] does not automatically evaluate model performance during training. You'll need to pass [`Trainer`] a function to compute and report metrics. The [🤗 Evaluate](https://huggingface.co/docs/evaluate/index) library provides a simple [`accuracy`](https://huggingface.co/spaces/evaluate-metric/accuracy) function you can load with the [`evaluate.load`] (see this [quicktour](https://huggingface.co/docs/evaluate/a_quick_tour) for more information) function: ```py >>> import numpy as np >>> import evaluate >>> metric = evaluate.load("accuracy") ``` Call [`~evaluate.compute`] on `metric` to calculate the accuracy of your predictions. 
Before passing your predictions to `compute`, you need to convert the logits to predictions (remember all 🤗 Transformers models return logits): ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... predictions = np.argmax(logits, axis=-1) ... return metric.compute(predictions=predictions, references=labels) ``` If you'd like to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch: ```py >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch") ``` ### Trainer Create a [`Trainer`] object with your model, training arguments, training and test datasets, and evaluation function: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` Then fine-tune your model by calling [`~transformers.Trainer.train`]: ```py >>> trainer.train() ``` </pt> <tf> <a id='keras'></a> <Youtube id="rnTGBy2ax1c"/> ## Train a TensorFlow model with Keras You can also train 🤗 Transformers models in TensorFlow with the Keras API! ### Loading data for Keras When you want to train a 🤗 Transformers model with the Keras API, you need to convert your dataset to a format that Keras understands. If your dataset is small, you can just convert the whole thing to NumPy arrays and pass it to Keras. Let's try that first before we do anything more complicated. First, load a dataset. We'll use the CoLA dataset from the [GLUE benchmark](https://huggingface.co/datasets/glue), since it's a simple binary text classification task, and just take the training split for now. ```py from datasets import load_dataset dataset = load_dataset("glue", "cola") dataset = dataset["train"] # Just take the training split for now ``` Next, load a tokenizer and tokenize the data as NumPy arrays. Note that the labels are already a list of 0 and 1s, so we can just convert that directly to a NumPy array without tokenization! ```py from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True) # Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras tokenized_data = dict(tokenized_data) labels = np.array(dataset["label"]) # Label is already an array of 0 and 1 ``` Finally, load, [`compile`](https://keras.io/api/models/model_training_apis/#compile-method), and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) the model. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: ```py from transformers import TFAutoModelForSequenceClassification from tensorflow.keras.optimizers import Adam # Load and compile our model model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased") # Lower learning rates are often better for fine-tuning transformers model.compile(optimizer=Adam(3e-5)) # No loss argument! model.fit(tokenized_data, labels) ``` <Tip> You don't have to pass a loss argument to your models when you `compile()` them! Hugging Face models automatically choose a loss that is appropriate for their task and model architecture if this argument is left blank. You can always override this by specifying a loss yourself if you want to! 
</Tip> This approach works great for smaller datasets, but for larger datasets, you might find it starts to become a problem. Why? Because the tokenized array and labels would have to be fully loaded into memory, and because NumPy doesn’t handle “jagged” arrays, so every tokenized sample would have to be padded to the length of the longest sample in the whole dataset. That’s going to make your array even bigger, and all those padding tokens will slow down training too! ### Loading data as a tf.data.Dataset If you want to avoid slowing down training, you can load your data as a `tf.data.Dataset` instead. Although you can write your own `tf.data` pipeline if you want, we have two convenience methods for doing this: - [`~TFPreTrainedModel.prepare_tf_dataset`]: This is the method we recommend in most cases. Because it is a method on your model, it can inspect the model to automatically figure out which columns are usable as model inputs, and discard the others to make a simpler, more performant dataset. - [`~datasets.Dataset.to_tf_dataset`]: This method is more low-level, and is useful when you want to exactly control how your dataset is created, by specifying exactly which `columns` and `label_cols` to include. Before you can use [`~TFPreTrainedModel.prepare_tf_dataset`], you will need to add the tokenizer outputs to your dataset as columns, as shown in the following code sample: ```py def tokenize_dataset(data): # Keys of the returned dictionary will be added to the dataset as columns return tokenizer(data["text"]) dataset = dataset.map(tokenize_dataset) ``` Remember that Hugging Face datasets are stored on disk by default, so this will not inflate your memory usage! Once the columns have been added, you can stream batches from the dataset and add padding to each batch, which greatly reduces the number of padding tokens compared to padding the entire dataset. ```py >>> tf_dataset = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer) ``` Note that in the code sample above, you need to pass the tokenizer to `prepare_tf_dataset` so it can correctly pad batches as they're loaded. If all the samples in your dataset are the same length and no padding is necessary, you can skip this argument. If you need to do something more complex than just padding samples (e.g. corrupting tokens for masked language modelling), you can use the `collate_fn` argument instead to pass a function that will be called to transform the list of samples into a batch and apply any preprocessing you want. See our [examples](https://github.com/huggingface/transformers/tree/main/examples) or [notebooks](https://huggingface.co/docs/transformers/notebooks) to see this approach in action. Once you've created a `tf.data.Dataset`, you can compile and fit the model as before: ```py model.compile(optimizer=Adam(3e-5)) # No loss argument! model.fit(tf_dataset) ``` </tf> </frameworkcontent> <a id='pytorch_native'></a> ## Train in native PyTorch <frameworkcontent> <pt> <Youtube id="Dh9CL8fyG80"/> [`Trainer`] takes care of the training loop and allows you to fine-tune a model in a single line of code. For users who prefer to write their own training loop, you can also fine-tune a 🤗 Transformers model in native PyTorch. At this point, you may need to restart your notebook or execute the following code to free some memory: ```py del model del trainer torch.cuda.empty_cache() ``` Next, manually postprocess `tokenized_dataset` to prepare it for training. 1. 
Remove the `text` column because the model does not accept raw text as an input: ```py >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"]) ``` 2. Rename the `label` column to `labels` because the model expects the argument to be named `labels`: ```py >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels") ``` 3. Set the format of the dataset to return PyTorch tensors instead of lists: ```py >>> tokenized_datasets.set_format("torch") ``` Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` ### DataLoader Create a `DataLoader` for your training and test datasets so you can iterate over batches of data: ```py >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) ``` Load your model with the number of expected labels: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` ### Optimizer and learning rate scheduler Create an optimizer and learning rate scheduler to fine-tune the model. Let's use the [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) optimizer from PyTorch: ```py >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) ``` Create the default learning rate scheduler from [`Trainer`]: ```py >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( ... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ... ) ``` Lastly, specify `device` to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes. ```py >>> import torch >>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") >>> model.to(device) ``` <Tip> Get free access to a cloud GPU if you don't have one with a hosted notebook like [Colaboratory](https://colab.research.google.com/) or [SageMaker StudioLab](https://studiolab.sagemaker.aws/). </Tip> Great, now you are ready to train! 🥳 ### Training loop To keep track of your training progress, use the [tqdm](https://tqdm.github.io/) library to add a progress bar over the number of training steps: ```py >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ### Evaluate Just like how you added an evaluation function to [`Trainer`], you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you'll accumulate all the batches with [`~evaluate.add_batch`] and calculate the metric at the very end. ```py >>> import evaluate >>> metric = evaluate.load("accuracy") >>> model.eval() >>> for batch in eval_dataloader: ... 
batch = {k: v.to(device) for k, v in batch.items()} ... with torch.no_grad(): ... outputs = model(**batch) ... logits = outputs.logits ... predictions = torch.argmax(logits, dim=-1) ... metric.add_batch(predictions=predictions, references=batch["labels"]) >>> metric.compute() ``` </pt> </frameworkcontent> <a id='additional-resources'></a> ## Additional resources For more fine-tuning examples, refer to: - [🤗 Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) includes scripts to train common NLP tasks in PyTorch and TensorFlow. - [🤗 Transformers Notebooks](notebooks) contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow.
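As a final recap of the native PyTorch section above, here is a minimal, hedged sketch that folds the evaluation loop into the epoch loop so accuracy is reported after every epoch. It reuses the `model`, `optimizer`, `lr_scheduler`, `device`, `num_epochs`, `num_training_steps`, and dataloaders defined earlier; the per-epoch reporting is an illustrative variation, not part of the original recipe:

```py
import evaluate
import torch
from tqdm.auto import tqdm

metric = evaluate.load("accuracy")
progress_bar = tqdm(range(num_training_steps))

for epoch in range(num_epochs):
    # Training phase: identical to the training loop shown above
    model.train()
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

    # Evaluation phase: accumulate batches, then compute accuracy for this epoch
    model.eval()
    for batch in eval_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            logits = model(**batch).logits
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=batch["labels"])
    print(f"epoch {epoch}: {metric.compute()}")
```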
huggingface/transformers/blob/main/docs/source/en/training.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BART <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=bart"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-bart-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/bart-large-mnli"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The Bart model was proposed in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, - Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). - The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. - BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The authors' code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/bart). ## Usage tips: - BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - Sequence-to-sequence model with an encoder and a decoder. Encoder is fed a corrupted version of the tokens, decoder is fed the original tokens (but has a mask to hide the future words like a regular transformers decoder). A composition of the following transformations are applied on the pretraining tasks for the encoder: * mask random tokens (like in BERT) * delete random tokens * mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token) * permute sentences * rotate the document to make it start at a specific token ## Implementation Notes - Bart doesn't use `token_type_ids` for sequence classification. Use [`BartTokenizer`] or [`~BartTokenizer.encode`] to get the proper splitting. - The forward pass of [`BartModel`] will create the `decoder_input_ids` if they are not passed. This is different than some other modeling APIs. A typical use case of this feature is mask filling. - Model predictions are intended to be identical to the original implementation when `forced_bos_token_id=0`. 
This only works, however, if the string you pass to [`fairseq.encode`] starts with a space. - [`~generation.GenerationMixin.generate`] should be used for conditional generation tasks like summarization, see the example in that docstrings. - Models that load the *facebook/bart-large-cnn* weights will not have a `mask_token_id`, or be able to perform mask-filling tasks. ## Mask Filling The `facebook/bart-base` and `facebook/bart-large` checkpoints can be used to fill multi-token masks. ```python from transformers import BartForConditionalGeneration, BartTokenizer model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0) tok = BartTokenizer.from_pretrained("facebook/bart-large") example_english_phrase = "UN Chief Says There Is No <mask> in Syria" batch = tok(example_english_phrase, return_tensors="pt") generated_ids = model.generate(batch["input_ids"]) assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [ "UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria" ] ``` ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="summarization"/> - A blog post on [Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq). - A notebook on how to [finetune BART for summarization with fastai using blurr](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb). 🌎 - A notebook on how to [finetune BART for summarization in two languages with Trainer class](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb). 🌎 - [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb). - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb). - [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization). - An example of how to train [`BartForConditionalGeneration`] with a Hugging Face `datasets` object can be found in this [forum discussion](https://discuss.huggingface.co/t/train-bart-for-conditional-generation-e-g-summarization/1904) - [Summarization](https://huggingface.co/course/chapter7/5?fw=pt#summarization) chapter of the 🤗 Hugging Face course. 
- [Summarization task guide](../tasks/summarization) <PipelineTag pipeline="fill-mask"/> - [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="translation"/> - A notebook on how to [finetune mBART using Seq2SeqTrainer for Hindi to English translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb). 🌎 - [`BartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb). - [`TFBartForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb). - [Translation task guide](../tasks/translation) See also: - [Text classification task guide](../tasks/sequence_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Distilled checkpoints](https://huggingface.co/models?search=distilbart) are described in this [paper](https://arxiv.org/abs/2010.13002). 
## BartConfig [[autodoc]] BartConfig - all ## BartTokenizer [[autodoc]] BartTokenizer - all ## BartTokenizerFast [[autodoc]] BartTokenizerFast - all <frameworkcontent> <pt> ## BartModel [[autodoc]] BartModel - forward ## BartForConditionalGeneration [[autodoc]] BartForConditionalGeneration - forward ## BartForSequenceClassification [[autodoc]] BartForSequenceClassification - forward ## BartForQuestionAnswering [[autodoc]] BartForQuestionAnswering - forward ## BartForCausalLM [[autodoc]] BartForCausalLM - forward </pt> <tf> ## TFBartModel [[autodoc]] TFBartModel - call ## TFBartForConditionalGeneration [[autodoc]] TFBartForConditionalGeneration - call ## TFBartForSequenceClassification [[autodoc]] TFBartForSequenceClassification - call </tf> <jax> ## FlaxBartModel [[autodoc]] FlaxBartModel - __call__ - encode - decode ## FlaxBartForConditionalGeneration [[autodoc]] FlaxBartForConditionalGeneration - __call__ - encode - decode ## FlaxBartForSequenceClassification [[autodoc]] FlaxBartForSequenceClassification - __call__ - encode - decode ## FlaxBartForQuestionAnswering [[autodoc]] FlaxBartForQuestionAnswering - __call__ - encode - decode ## FlaxBartForCausalLM [[autodoc]] FlaxBartForCausalLM - __call__ </jax> </frameworkcontent>
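As a usage illustration to complement the Implementation Notes above (which recommend [`~generation.GenerationMixin.generate`] for conditional generation tasks such as summarization), here is a minimal, hedged sketch. The *facebook/bart-large-cnn* checkpoint is the summarization checkpoint mentioned earlier on this page; the input article and the generation parameters (`num_beams`, `min_length`, `max_length`) are illustrative choices, not prescriptions:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Example checkpoint fine-tuned for summarization
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

article = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires."
)

# Tokenize, generate with beam search, then decode back to text
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, min_length=5, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```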
huggingface/transformers/blob/main/docs/source/en/model_doc/bart.md
Let's have a look inside the question answering pipeline. The question answering pipeline can extract answers to questions from a given context or passage of text, like this part of the Transformers repo README. It also works for very long contexts, even if the answer is at the very end, like in this example. In this video, we will see why! The question answering pipeline follows the same steps as the other pipelines: the question and context are tokenized as a sentence pair, fed to the model, then some post-processing is applied. The tokenization and model steps should be familiar. We use the auto class suitable for Question Answering instead of sequence classification, but one key difference with text classification is that our model outputs two tensors named start logits and end logits. Why is that? Well, this is the way the model finds the answer to the question. First, let's have a look at the model inputs. They are the numbers associated with the tokenization of the question followed by the context (with the usual CLS and SEP special tokens). The answer is a part of those tokens. So we ask the model to predict which token starts the answer and which token ends the answer. For our two logit outputs, the theoretical labels are the pink and purple vectors. To convert those logits into probabilities, we will need to apply a SoftMax, like in the text classification pipeline. We just mask the tokens that are not part of the context before doing that, leaving the initial CLS token unmasked as we use it to predict an impossible answer. This is what it looks like in terms of code. We use a large negative number for the masking, since its exponential will then be 0. Now, for each start and end position corresponding to a possible answer, we give a score that is the product of the start probability and the end probability at those positions. Of course, a start index greater than an end index corresponds to an impossible answer. Here is the code to find the best score for a possible answer. Once we have the start and end positions of the tokens, we use the offset mappings provided by our tokenizer to find the span of characters in the initial context, and get our answer! Now, when the context is long, it might get truncated by the tokenizer. This might result in part of the answer, or worse, the whole answer, being truncated. So we don't discard the truncated tokens but build new features with them. Each of those features contains the question, then a chunk of text from the context. If we take disjoint chunks of text, we might end up with the answer being split between two features. So instead, we take overlapping chunks of text, to make sure at least one of the chunks will fully contain the answer to the question. The tokenizers do all of this for us automatically with the return overflowing tokens option. The stride argument controls the number of overlapping tokens. Here is how our very long context gets truncated into two features with some overlap. By applying the same post-processing we saw before for each feature, we get an answer with a score for each of them, and we take the answer with the best score as the final solution.
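To make the masking and scoring steps described above more concrete, here is a small, hedged sketch for a single example. The logits below are random placeholders standing in for the outputs of a TensorFlow question-answering model, and the mask is assumed to be True for the CLS token and the context tokens only:

```python
import tensorflow as tf

sequence_length = 20
# Placeholder logits; in the real pipeline these come from the model's start/end heads
start_logits = tf.random.normal((sequence_length,))
end_logits = tf.random.normal((sequence_length,))
# Placeholder mask: True for the CLS token and the context tokens, False for question tokens
mask = tf.constant([True] + [False] * 9 + [True] * 10)

# Mask non-context tokens with a large negative number so their SoftMax probability is ~0
start_probs = tf.nn.softmax(tf.where(mask, start_logits, -10000.0))
end_probs = tf.nn.softmax(tf.where(mask, end_logits, -10000.0))

# Score every (start, end) pair as the product of the two probabilities,
# keeping only pairs where end >= start (the upper triangle)
scores = start_probs[:, None] * end_probs[None, :]
scores = tf.linalg.band_part(scores, 0, -1)

best_index = tf.argmax(tf.reshape(scores, [-1]))
start_index = int(best_index) // sequence_length
end_index = int(best_index) % sequence_length
print(start_index, end_index, float(tf.reshape(scores, [-1])[best_index]))
```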
huggingface/course/blob/main/subtitles/en/raw/chapter6/04_question-answering-pipeline-tf.md
Intro Authors: @patrickvonplaten and @lhoestq Aimed at tackling the knowledge-intensive NLP tasks (think tasks a human wouldn't be expected to solve without access to external knowledge sources), RAG models are seq2seq models with access to a retrieval mechanism providing relevant context documents at training and evaluation time. A RAG model encapsulates two core components: a question encoder and a generator. During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator. Read more about RAG at https://arxiv.org/abs/2005.11401. # Note ⚠️ This project should be run with pytorch-lightning==1.3.1 which has a potential security vulnerability # Finetuning Our finetuning logic is based on scripts from [`examples/legacy/seq2seq`](https://github.com/huggingface/transformers/tree/main/examples/legacy/seq2seq). We accept training data in the same format as specified there - we expect a directory consisting of 6 text files: ```bash train.source train.target val.source val.target test.source test.target ``` A sample finetuning command (run ` ./examples/research_projects/rag/finetune_rag.py --help` to list all available options): ```bash python examples/research_projects/rag/finetune_rag.py \ --data_dir $DATA_DIR \ --output_dir $OUTPUT_DIR \ --model_name_or_path $MODEL_NAME_OR_PATH \ --model_type rag_sequence \ --fp16 \ --gpus 8 ``` We publish two `base` models which can serve as a starting point for finetuning on downstream tasks (use them as `model_name_or_path`): - [`facebook/rag-sequence-base`](https://huggingface.co/facebook/rag-sequence-base) - a base for finetuning `RagSequenceForGeneration` models, - [`facebook/rag-token-base`](https://huggingface.co/facebook/rag-token-base) - a base for finetuning `RagTokenForGeneration` models. The `base` models initialize the question encoder with [`facebook/dpr-question_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) and the generator with [`facebook/bart-large`](https://huggingface.co/facebook/bart-large). If you would like to initialize finetuning with a base model using different question encoder and generator architectures, you can build it with a consolidation script, e.g.: ``` python examples/research_projects/rag/consolidate_rag_checkpoint.py \ --model_type rag_sequence \ --generator_name_or_path facebook/bart-large-cnn \ --question_encoder_name_or_path facebook/dpr-question_encoder-single-nq-base \ --dest path/to/checkpoint ``` You will then be able to pass `path/to/checkpoint` as `model_name_or_path` to the `finetune_rag.py` script. ## Document Retrieval When running distributed fine-tuning, each training worker needs to retrieve contextual documents for its input by querying a index loaded into memory. RAG provides two implementations for document retrieval, one with [`torch.distributed`](https://pytorch.org/docs/stable/distributed.html) communication package and the other with [`Ray`](https://docs.ray.io/en/master/). This option can be configured with the `--distributed_retriever` flag which can either be set to `pytorch` or `ray`. By default this flag is set to `pytorch`. For the Pytorch implementation, only training worker 0 loads the index into CPU memory, and a gather/scatter pattern is used to collect the inputs from the other training workers and send back the corresponding document embeddings. 
For the Ray implementation, the index is loaded in *separate* process(es). The training workers randomly select which retriever worker to query. To use Ray for distributed retrieval, you have to set the `--distributed_retriever` arg to `ray`. To configure the number of retrieval workers (the number of processes that load the index), you can set the `num_retrieval_workers` flag. Also make sure to start the Ray cluster before running fine-tuning.

```bash
# Start a single-node Ray cluster.
ray start --head

python examples/research_projects/rag/finetune_rag.py \
    --data_dir $DATA_DIR \
    --output_dir $OUTPUT_DIR \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --model_type rag_sequence \
    --fp16 \
    --gpus 8 \
    --distributed_retriever ray \
    --num_retrieval_workers 4

# Stop the ray cluster once fine-tuning has finished.
ray stop
```

Using Ray can lead to retrieval speedups in multi-GPU settings since multiple processes load the index rather than just the rank 0 training worker. Using Ray also allows you to load the index on GPU since the index is loaded in a separate process from the model, while with PyTorch distributed retrieval, both are loaded in the same process, potentially leading to GPU OOM.

# Evaluation

Our evaluation script enables two modes of evaluation (controlled by the `eval_mode` argument): `e2e` - end-to-end evaluation, which returns EM (exact match) and F1 scores calculated for the downstream task, and `retrieval` - which returns precision@k of the documents retrieved for provided inputs.

The evaluation script expects paths to two files:
- `evaluation_set` - a path to a file specifying the evaluation dataset, a single input per line.
- `gold_data_path` - a path to a file containing ground truth answers for datapoints from the `evaluation_set`, a single output per line.

Check below for expected formats of the gold data files.

## Retrieval evaluation

For `retrieval` evaluation, we expect a gold data file where each line will consist of a tab-separated list of document titles constituting positive contexts for respective datapoints from the `evaluation_set`. E.g. given a question `who sings does he love me with reba` in the `evaluation_set`, a respective ground truth line could look as follows:
```
Does He Love You Does He Love You Red Sandy Spika dress of Reba McEntire Greatest Hits Volume Two (Reba McEntire album) Shoot for the Moon (album)
```

We demonstrate how to evaluate retrieval against DPR evaluation data. You can download respective files from links listed [here](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py#L39-L45).

1. Download and unzip the gold data file. We use the `biencoder-nq-dev` from https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz.
```bash
wget https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz && gzip -d biencoder-nq-dev.json.gz
```
2. Parse the unzipped file using `parse_dpr_relevance_data.py`:
```bash
mkdir output # or wherever you want to save this
python examples/research_projects/rag/parse_dpr_relevance_data.py \
    --src_path biencoder-nq-dev.json \
    --evaluation_set output/biencoder-nq-dev.questions \
    --gold_data_path output/biencoder-nq-dev.pages
```
3.
Run evaluation: ```bash python examples/research_projects/rag/eval_rag.py \ --model_name_or_path facebook/rag-sequence-nq \ --model_type rag_sequence \ --evaluation_set output/biencoder-nq-dev.questions \ --gold_data_path output/biencoder-nq-dev.pages \ --predictions_path output/retrieval_preds.tsv \ --eval_mode retrieval \ --k 1 ``` ```bash # EXPLANATION python examples/research_projects/rag/eval_rag.py \ --model_name_or_path facebook/rag-sequence-nq \ # model name or path of the model we're evaluating --model_type rag_sequence \ # RAG model type (rag_token or rag_sequence) --evaluation_set output/biencoder-nq-dev.questions \ # an input dataset for evaluation --gold_data_path poutput/biencoder-nq-dev.pages \ # a dataset containing ground truth answers for samples from the evaluation_set --predictions_path output/retrieval_preds.tsv \ # name of file where predictions will be stored --eval_mode retrieval \ # indicates whether we're performing retrieval evaluation or e2e evaluation --k 1 # parameter k for the precision@k metric ``` ## End-to-end evaluation We support two formats of the gold data file (controlled by the `gold_data_mode` parameter): - `qa` - where a single line has the following format: `input [tab] output_list`, e.g.: ``` who is the owner of reading football club ['Xiu Li Dai', 'Dai Yongge', 'Dai Xiuli', 'Yongge Dai'] ``` - `ans` - where a single line contains a single expected answer, e.g.: ``` Xiu Li Dai ``` Predictions of the model for the samples from the `evaluation_set` will be saved under the path specified by the `predictions_path` parameter. If this path already exists, the script will use saved predictions to calculate metrics. Add `--recalculate` parameter to force the script to perform inference from scratch. An example e2e evaluation run could look as follows: ```bash python examples/research_projects/rag/eval_rag.py \ --model_name_or_path facebook/rag-sequence-nq \ --model_type rag_sequence \ --evaluation_set path/to/test.source \ --gold_data_path path/to/gold_data \ --predictions_path path/to/e2e_preds.txt \ --eval_mode e2e \ --gold_data_mode qa \ --n_docs 5 \ # You can experiment with retrieving different number of documents at evaluation time --print_predictions \ --recalculate \ # adding this parameter will force recalculating predictions even if predictions_path already exists ``` # Use your own knowledge source By default, RAG uses the English Wikipedia as a knowledge source, known as the 'wiki_dpr' dataset. With `use_custom_knowledge_dataset.py` you can build your own knowledge source, *e.g.* for RAG. For instance, if documents are serialized as tab-separated csv files with the columns "title" and "text", one can use `use_own_knowledge_dataset.py` as follows: ```bash python examples/research_projects/rag/use_own_knowledge_dataset.py \ --csv_path path/to/my_csv \ --output_dir path/to/my_knowledge_dataset \ ``` The created outputs in `path/to/my_knowledge_dataset` can then be used to finetune RAG as follows: ```bash python examples/research_projects/rag/finetune_rag.py \ --data_dir $DATA_DIR \ --output_dir $OUTPUT_DIR \ --model_name_or_path $MODEL_NAME_OR_PATH \ --model_type rag_sequence \ --fp16 \ --gpus 8 --index_name custom --passages_path path/to/data/my_knowledge_dataset --index_path path/to/my_knowledge_dataset_hnsw_index.faiss ```
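Once you have a fine-tuned checkpoint (or use the published `facebook/rag-sequence-nq` model referenced in the evaluation commands above), a minimal, hedged sketch of running generation in Python looks roughly like this; `use_dummy_dataset=True` swaps in a tiny toy index so the example runs without downloading the full wiki_dpr index, and the question is the example used earlier on this page:

```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

checkpoint = "facebook/rag-sequence-nq"  # or the path to your fine-tuned output_dir
tokenizer = RagTokenizer.from_pretrained(checkpoint)
# use_dummy_dataset=True loads a small toy index instead of the full wiki_dpr index
retriever = RagRetriever.from_pretrained(checkpoint, index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained(checkpoint, retriever=retriever)

inputs = tokenizer("who sings does he love me with reba", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```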
huggingface/transformers/blob/main/examples/research_projects/rag/README.md
Load a dataset from the Hub Finding high-quality datasets that are reproducible and accessible can be difficult. One of 🤗 Datasets main goals is to provide a simple way to load a dataset of any format or type. The easiest way to get started is to discover an existing dataset on the [Hugging Face Hub](https://huggingface.co/datasets) - a community-driven collection of datasets for tasks in NLP, computer vision, and audio - and use 🤗 Datasets to download and generate the dataset. This tutorial uses the [rotten_tomatoes](https://huggingface.co/datasets/rotten_tomatoes) and [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) datasets, but feel free to load any dataset you want and follow along. Head over to the Hub now and find a dataset for your task! ## Load a dataset Before you take the time to download a dataset, it's often helpful to quickly get some general information about a dataset. A dataset's information is stored inside [`DatasetInfo`] and can include information such as the dataset description, features, and dataset size. Use the [`load_dataset_builder`] function to load a dataset builder and inspect a dataset's attributes without committing to downloading it: ```py >>> from datasets import load_dataset_builder >>> ds_builder = load_dataset_builder("rotten_tomatoes") # Inspect dataset description >>> ds_builder.info.description Movie Review Dataset. This is a dataset of containing 5,331 positive and 5,331 negative processed sentences from Rotten Tomatoes movie reviews. This data was first used in Bo Pang and Lillian Lee, ``Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales.'', Proceedings of the ACL, 2005. # Inspect dataset features >>> ds_builder.info.features {'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None), 'text': Value(dtype='string', id=None)} ``` If you're happy with the dataset, then load it with [`load_dataset`]: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes", split="train") ``` ## Splits A split is a specific subset of a dataset like `train` and `test`. List a dataset's split names with the [`get_dataset_split_names`] function: ```py >>> from datasets import get_dataset_split_names >>> get_dataset_split_names("rotten_tomatoes") ['train', 'validation', 'test'] ``` Then you can load a specific split with the `split` parameter. Loading a dataset `split` returns a [`Dataset`] object: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes", split="train") >>> dataset Dataset({ features: ['text', 'label'], num_rows: 8530 }) ``` If you don't specify a `split`, 🤗 Datasets returns a [`DatasetDict`] object instead: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") DatasetDict({ train: Dataset({ features: ['text', 'label'], num_rows: 8530 }) validation: Dataset({ features: ['text', 'label'], num_rows: 1066 }) test: Dataset({ features: ['text', 'label'], num_rows: 1066 }) }) ``` ## Configurations Some datasets contain several sub-datasets. For example, the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset has several sub-datasets, each one containing audio data in a different language. These sub-datasets are known as *configurations*, and you must explicitly select one when loading the dataset. If you don't provide a configuration name, 🤗 Datasets will raise a `ValueError` and remind you to choose a configuration. 
Use the [`get_dataset_config_names`] function to retrieve a list of all the possible configurations available to your dataset:

```py
>>> from datasets import get_dataset_config_names

>>> configs = get_dataset_config_names("PolyAI/minds14")
>>> print(configs)
['cs-CZ', 'de-DE', 'en-AU', 'en-GB', 'en-US', 'es-ES', 'fr-FR', 'it-IT', 'ko-KR', 'nl-NL', 'pl-PL', 'pt-PT', 'ru-RU', 'zh-CN', 'all']
```

Then load the configuration you want:

```py
>>> from datasets import load_dataset

>>> mindsFR = load_dataset("PolyAI/minds14", "fr-FR", split="train")
```

## Remote code

Certain dataset repositories contain a loading script with the Python code used to generate the dataset. Those datasets are generally exported to Parquet by Hugging Face, so that 🤗 Datasets can load the dataset fast and without running a loading script.

Even if a Parquet export is not available, you can still use any dataset with Python code in its repository with `load_dataset`. All files and code uploaded to the Hub are scanned for malware (refer to the Hub security documentation for more information), but you should still review the dataset loading scripts and authors to avoid executing malicious code on your machine. You should set `trust_remote_code=True` to use a dataset with a loading script, or you will get a warning:

```py
>>> from datasets import get_dataset_config_names, get_dataset_split_names, load_dataset

>>> c4 = load_dataset("c4", "en", split="train", trust_remote_code=True)
>>> get_dataset_config_names("c4", trust_remote_code=True)
['en', 'realnewslike', 'en.noblocklist', 'en.noclean']
>>> get_dataset_split_names("c4", "en", trust_remote_code=True)
['train', 'validation']
```

<Tip warning={true}>

In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass `trust_remote_code=True` to load datasets that require running a dataset script.

</Tip>
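Putting the pieces above together, here is a small sketch that uses only the functions introduced on this page; the `fr-FR` configuration is just an example choice:

```py
from datasets import get_dataset_config_names, get_dataset_split_names, load_dataset

# Discover the available configurations and splits before downloading anything
configs = get_dataset_config_names("PolyAI/minds14")
print(configs)

splits = get_dataset_split_names("PolyAI/minds14", "fr-FR")
print(splits)

# Load one configuration and one split
minds_fr_train = load_dataset("PolyAI/minds14", "fr-FR", split="train")
print(minds_fr_train)
```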
huggingface/datasets/blob/main/docs/source/load_hub.mdx
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Philosophy 🧨 Diffusers provides **state-of-the-art** pretrained diffusion models across multiple modalities. Its purpose is to serve as a **modular toolbox** for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on [PyTorch's Design Principles](https://pytorch.org/docs/stable/community/design.html#pytorch-design-philosophy). Let's go over the most important ones: ## Usability over Performance - While Diffusers has many built-in performance-enhancing features (see [Memory and Speed](https://huggingface.co/docs/diffusers/optimization/fp16)), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. - Diffusers aims to be a **light-weight** package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as `accelerate`, `safetensors`, `onnx`, etc...). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. - Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. ## Simple over easy As PyTorch states, **explicit is better than implicit** and **simple is better than complex**. This design philosophy is reflected in multiple parts of the library: - We follow PyTorch's API with methods like [`DiffusionPipeline.to`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.to) to let the user handle device management. - Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. - Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. - Separately trained components of the diffusion pipeline, *e.g.* the text encoder, the UNet, and the variational autoencoder, each has their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. 
However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training is very simple thanks to Diffusers' ability to separate single components of the diffusion pipeline. ## Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the [Transformers library](https://github.com/huggingface/transformers), which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as [Don't repeat yourself (DRY)](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself). In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. **However**, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: - Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. - Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. - Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the **single-file policy** which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look at [this blog post](https://huggingface.co/blog/transformers-design-philosophy). In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don't follow this design fully for diffusion models is because almost all diffusion pipelines, such as [DDPM](https://huggingface.co/docs/diffusers/api/pipelines/ddpm), [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview#stable-diffusion-pipelines), [unCLIP (DALL·E 2)](https://huggingface.co/docs/diffusers/api/pipelines/unclip) and [Imagen](https://imagen.research.google/) all rely on the same diffusion model, the [UNet](https://huggingface.co/docs/diffusers/api/models/unet2d-cond). Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. 
If you have feedback regarding the design, we would ❤️ to hear it [directly on GitHub](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=). ## Design Philosophy in Details Now, let's look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines), [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models), and [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers). Let's walk through more detailed design decisions for each class. ### Pipelines Pipelines are designed to be easy to use (therefore do not follow [*Simple over easy*](#simple-over-easy) 100%), are not feature complete, and should loosely be seen as examples of how to use [models](#models) and [schedulers](#schedulers) for inference. The following design principles are followed: - Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable-diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [#Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251). - Pipelines all inherit from [`DiffusionPipeline`]. - Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function. - Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function. - Pipelines should be used **only** for inference. - Pipelines should be very readable, self-explanatory, and easy to tweak. - Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. - Pipelines are **not** intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at [InvokeAI](https://github.com/invoke-ai/InvokeAI), [Diffuzers](https://github.com/abhishekkrthakur/diffuzers), and [lama-cleaner](https://github.com/Sanster/lama-cleaner). - Every pipeline should have one and only one way to run it via a `__call__` method. The naming of the `__call__` arguments should be shared across all pipelines. - Pipelines should be named after the task they are intended to solve. - In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. ### Models Models are designed as configurable toolboxes that are natural extensions of [PyTorch's Module class](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). They only partly follow the **single-file policy**. 
The following design principles are followed: - Models correspond to **a type of model architecture**. *E.g.* the [`UNet2DConditionModel`] class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. - All models can be found in [`src/diffusers/models`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and every model architecture shall be defined in its file, e.g. [`unet_2d_condition.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_condition.py), [`transformer_2d.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformer_2d.py), etc... - Models **do not** follow the single-file policy and should make use of smaller model building blocks, such as [`attention.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py), [`resnet.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/resnet.py), [`embeddings.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py), etc... **Note**: This is in stark contrast to Transformers' modeling files and shows that models do not really follow the single-file policy. - Models intend to expose complexity, just like PyTorch's `Module` class, and give clear error messages. - Models all inherit from `ModelMixin` and `ConfigMixin`. - Models can be optimized for performance when it doesn’t demand major code changes, keep backward compatibility, and give significant memory or compute gain. - Models should by default have the highest precision and lowest performance setting. - To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. - Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and "foreseeing" future changes, *e.g.* it is usually better to add `string` "...type" arguments that can easily be extended to new future types instead of boolean `is_..._type` arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. - The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and readable long-term, such as [UNet blocks](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_blocks.py) and [Attention processors](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). ### Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the **single-file policy**. The following design principles are followed: - All schedulers are found in [`src/diffusers/schedulers`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers). - Schedulers are **not** allowed to import from large utils files and shall be kept very self-contained. 
- One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). - If schedulers share similar functionalities, we can make use of the `#Copied from` mechanism. - Schedulers all inherit from `SchedulerMixin` and `ConfigMixin`. - Schedulers can be easily swapped out with the [`ConfigMixin.from_config`](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) method as explained in detail [here](./docs/source/en/using-diffusers/schedulers.md). - Every scheduler has to have a `set_num_inference_steps`, and a `step` function. `set_num_inference_steps(...)` has to be called before every denoising process, *i.e.* before `step(...)` is called. - Every scheduler exposes the timesteps to be "looped over" via a `timesteps` attribute, which is an array of timesteps the model will be called upon. - The `step(...)` function takes a predicted model output and the "current" sample (x_t) and returns the "previous", slightly more denoised sample (x_t-1). - Given the complexity of diffusion schedulers, the `step` function does not expose all the complexity and can be a bit of a "black box". - In almost all cases, novel schedulers shall be implemented in a new scheduling file.
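To illustrate the exposed model/scheduler split described above (the unrolled denoising loop the user writes), here is a minimal, hedged sketch. The checkpoint, number of inference steps, and unconditional DDPM setup are example choices for illustration, not part of the philosophy itself, and the repository layout may require adjusting how the components are loaded:

```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# Load the two separately designed components
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")

# The scheduler exposes the timesteps to loop over...
scheduler.set_timesteps(50)

sample_size = model.config.sample_size
sample = torch.randn((1, 3, sample_size, sample_size))

# ...and the user writes the unrolled denoising loop explicitly
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    # step() takes the model output and the current sample x_t and returns x_{t-1}
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```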
huggingface/diffusers/blob/main/PHILOSOPHY.md
Gradio Demo: theme_new_step_1 ``` !pip install -q gradio ``` ``` import gradio as gr from gradio.themes.base import Base import time class Seafoam(Base): pass seafoam = Seafoam() with gr.Blocks(theme=seafoam) as demo: textbox = gr.Textbox(label="Name") slider = gr.Slider(label="Count", minimum=0, maximum=100, step=1) with gr.Row(): button = gr.Button("Submit", variant="primary") clear = gr.Button("Clear") output = gr.Textbox(label="Output") def repeat(name, count): time.sleep(3) return name * count button.click(repeat, [textbox, slider], output) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/theme_new_step_1/run.ipynb
What happens inside the pipeline function? In this video, we will look at what actually happens when we use the pipeline function of the Transformers library. More specifically, we will look at the sentiment analysis pipeline, and how it went from the two following sentences to the positive labels with their respective scores. As we have seen in the pipeline presentation, there are three stages in the pipeline. First, we convert the raw texts to numbers the model can make sense of, using a tokenizer. Then, those numbers go through the model, which outputs logits. Finally, the post-processing step transforms those logits into labels and scores. Let's look in detail at those three steps, and how to replicate them using the Transformers library, beginning with the first stage, tokenization. The tokenization process has several steps. First, the text is split into small chunks called tokens. They can be words, parts of words or punctuation symbols. Then the tokenizer will add some special tokens (if the model expects them). Here the model expects a CLS token at the beginning and a SEP token at the end of the sentence to classify. Lastly, the tokenizer matches each token to its unique ID in the vocabulary of the pretrained model. To load such a tokenizer, the Transformers library provides the AutoTokenizer API. The most important method of this class is from_pretrained, which will download and cache the configuration and the vocabulary associated with a given checkpoint. Here, the checkpoint used by default for the sentiment analysis pipeline is distilbert base uncased finetuned sst2 english. We instantiate a tokenizer associated with that checkpoint, then feed it the two sentences. Since those two sentences are not of the same size, we will need to pad the shortest one to be able to build an array. This is done by the tokenizer with the option padding=True. With truncation=True, we ensure that any sentence longer than the maximum the model can handle is truncated. Lastly, the return_tensors option tells the tokenizer to return a TensorFlow tensor. Looking at the result, we see we have a dictionary with two keys. Input IDs contains the IDs of both sentences, with 0s where the padding is applied. The second key, attention mask, indicates where padding has been applied, so the model does not pay attention to it. This is all that is inside the tokenization step. Now let's have a look at the second step, the model. As for the tokenizer, there is a TFAutoModel API, with a from_pretrained method. It will download and cache the configuration of the model as well as the pretrained weights. However, the TFAutoModel API will only instantiate the body of the model, that is, the part of the model that is left once the pretraining head is removed. It will output a high-dimensional tensor that is a representation of the sentences passed, but which is not directly useful for our classification problem. Here the tensor has two sentences, each of sixteen tokens, and the last dimension is the hidden size of our model, 768. To get an output linked to our classification problem, we need to use the TFAutoModelForSequenceClassification class. It works exactly like the TFAutoModel class, except that it will build a model with a classification head. There is one auto class for each common NLP task in the Transformers library. Here, after giving our model the two sentences, we get a tensor of size two by two: one result for each sentence and for each possible label.
Those outputs are not probabilities yet (we can see they don't sum to 1). This is because each model of the Transformers library returns logits. To make sense of those logits, we need to dig into the third and last step of the pipeline: post-processing. To convert logits into probabilities, we need to apply a SoftMax layer to them. As we can see, this transforms them into positive numbers that sum up to 1. The last step is to know which of those corresponds to the positive or the negative label. This is given by the id2label field of the model config. The first probability (index 0) corresponds to the negative label, and the second (index 1) corresponds to the positive label. This is how our classifier built with the pipeline function picked those labels and computed those scores. Now that you know how each step works, you can easily tweak them to your needs.
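As a compact recap of the three stages walked through above, here is a hedged sketch in TensorFlow. The two input sentences are placeholders (the video's own examples are not reproduced here), while the checkpoint is the default sentiment-analysis one mentioned earlier:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

# Placeholder sentences standing in for the two examples used in the video
raw_inputs = ["I love this!", "I hate this so much."]

# Stage 1: tokenization (padding + truncation, TensorFlow tensors)
inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors="tf")

# Stage 2: the model outputs logits
logits = model(inputs).logits

# Stage 3: post-processing — SoftMax, then map indices to labels via id2label
probs = tf.nn.softmax(logits, axis=-1).numpy()
for sentence, p in zip(raw_inputs, probs):
    label_id = int(p.argmax())
    print(sentence, model.config.id2label[label_id], float(p[label_id]))
```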
huggingface/course/blob/main/subtitles/en/raw/chapter2/02_inside-pipeline-tf.md