Transformers documentation

πŸ€— Transformers Notebooks

You are viewing the v4.19.3 documentation. A newer version, v4.46.3, is available.

Here you can find a list of the official notebooks provided by Hugging Face.

We would also like to list interesting content created by the community. If you have written a notebook leveraging πŸ€— Transformers and would like it to be listed here, please open a Pull Request so it can be included under the Community notebooks.

Hugging Face's notebooks πŸ€—

Documentation notebooks

You can open any page of the documentation as a notebook in Colab (there is a button directly on those pages), but they are also listed here if you need them:

| Notebook | Description |
| --- | --- |
| Quicktour of the library | A presentation of the various APIs in Transformers |
| Summary of the tasks | How to run the models of the Transformers library task by task |
| Preprocessing data | How to use a tokenizer to preprocess your data |
| Fine-tuning a pretrained model | How to use the Trainer to fine-tune a pretrained model |
| Summary of the tokenizers | The differences between the tokenizer algorithms |
| Multilingual models | How to use the multilingual models of the library |
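As a conceptual illustration of what the tokenizer notebooks cover, here is one byte-pair-encoding (BPE) merge step in plain Python. This is a toy sketch of the algorithm, not the actual πŸ€— Tokenizers implementation; the corpus and symbol names are made up for the example:

```python
from collections import Counter

def most_frequent_pair(corpus):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in corpus.items():
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(corpus, pair):
    """Apply one BPE merge: fuse every occurrence of `pair` into a single symbol."""
    merged = {}
    for symbols, freq in corpus.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: word (pre-split into characters) -> frequency.
corpus = {("h", "u", "g"): 10, ("p", "u", "g"): 5, ("h", "u", "g", "s"): 5}
best = most_frequent_pair(corpus)    # ('u', 'g') occurs 20 times, the most
corpus = merge_pair(corpus, best)    # 'u' + 'g' fused into 'ug' everywhere
print(best, corpus)
```

A real BPE tokenizer simply repeats this loop until a target vocabulary size is reached, recording the merges in order so they can be replayed at tokenization time.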

PyTorch Examples

| Notebook | Description |
| --- | --- |
| Train your tokenizer | How to train and use your very own tokenizer |
| Train your language model | How to easily start using transformers |
| How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. |
| How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. |
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD. |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG. |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT. |
| How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM. |
| How to fine-tune a speech recognition model in English | Show how to preprocess the data and fine-tune a pretrained speech model on TIMIT |
| How to fine-tune a speech recognition model in any language | Show how to preprocess the data and fine-tune a multilingually pretrained speech model on Common Voice |
| How to fine-tune a model on audio classification | Show how to preprocess the data and fine-tune a pretrained speech model on keyword spotting |
| How to train a language model from scratch | Highlight all the steps to effectively train a Transformer model on custom data |
| How to generate text | How to use different decoding methods for language generation with transformers |
| How to generate text (with constraints) | How to guide language generation with user-provided constraints |
| How to export a model to ONNX | Highlight how to export and run inference workloads through ONNX |
| How to use Benchmarks | How to benchmark models with transformers |
| Reformer | How Reformer pushes the limits of language modeling |
| How to fine-tune a model on image classification | Show how to preprocess the data and fine-tune any pretrained vision model on image classification |
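The text-generation notebooks above contrast decoding strategies. The core ideas can be sketched over toy next-token scores in plain Python — a conceptual illustration, not the actual `generate()` implementation in transformers, and the logit values are invented for the example:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_pick(logits):
    """Greedy decoding: always take the highest-scoring token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def top_k_sample(logits, k, rng):
    """Top-k sampling: renormalize over the k best tokens, then draw one at random."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in top])
    return rng.choices(top, weights=probs, k=1)[0]

logits = [2.0, 0.5, 1.0, -1.0]     # toy scores over a 4-token vocabulary
print(greedy_pick(logits))          # -> 0 (the highest-scoring token)
rng = random.Random(0)
print(top_k_sample(logits, k=2, rng=rng))  # samples either token 0 or token 2
```

Greedy decoding is deterministic and tends to repeat itself; sampling strategies such as top-k trade some likelihood for diversity, which is the trade-off the generation notebooks explore in depth.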

TensorFlow Examples

| Notebook | Description |
| --- | --- |
| Train your tokenizer | How to train and use your very own tokenizer |
| Train your language model | How to easily start using transformers |
| How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. |
| How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. |
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD. |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG. |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT. |
| How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM. |
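The language-modeling notebooks in both frameworks distinguish causal and masked LM objectives. The data preparation behind them can be sketched in plain Python — a conceptual sketch, not the actual data-collator code in transformers; the token ids and mask id are made up for the example:

```python
import random

def causal_lm_pair(token_ids):
    """Causal LM: predict each token from everything before it,
    so inputs are the sequence and labels are the sequence shifted left."""
    return token_ids[:-1], token_ids[1:]

def mask_tokens(token_ids, mask_id, prob, rng):
    """Masked LM: replace a random subset of tokens with a mask id;
    labels keep the original id at masked positions (-100 means 'ignore')."""
    inputs, labels = [], []
    for t in token_ids:
        if rng.random() < prob:
            inputs.append(mask_id)
            labels.append(t)
        else:
            inputs.append(t)
            labels.append(-100)
    return inputs, labels

ids = [5, 8, 13, 21, 34]
print(causal_lm_pair(ids))   # -> ([5, 8, 13, 21], [8, 13, 21, 34])
print(mask_tokens(ids, mask_id=0, prob=0.3, rng=random.Random(0)))
```

The causal objective trains left-to-right generators (GPT-style), while the masked objective trains bidirectional encoders (BERT-style); the notebooks show how to build either kind of batch with the library's collators.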

Optimum notebooks

πŸ€— Optimum is an extension of πŸ€— Transformers that provides a set of performance optimization tools to train and run models with maximum efficiency on targeted hardware.

| Notebook | Description |
| --- | --- |
| How to quantize a model with ONNX Runtime for text classification | Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task. |
| How to quantize a model with Intel Neural Compressor for text classification | Show how to apply static, dynamic, and quantization-aware training on a model using Intel Neural Compressor (INC) for any GLUE task. |
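At its core, the quantization these notebooks apply maps float weights onto 8-bit integers through a scale and a zero-point. Here is a minimal pure-Python sketch of asymmetric affine quantization — conceptual only, not the ONNX Runtime or INC implementation, and the weight values are invented for the example:

```python
def quantize_int8(values):
    """Asymmetric affine quantization: map floats onto int8 via a
    per-tensor scale and zero-point computed from the observed range."""
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)       # range must include zero
    scale = (hi - lo) / 255.0 or 1.0          # 255 steps across the int8 range
    zero_point = round(-lo / scale) - 128     # int8 value that represents 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(x - zero_point) * scale for x in q]

weights = [-0.4, 0.0, 0.25, 1.1]
q, scale, zp = quantize_int8(weights)
approx = dequantize(q, scale, zp)
# Each reconstructed value is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
print(q, round(scale, 6), zp)
```

Dynamic quantization computes the range for activations on the fly at inference time, while static quantization calibrates it ahead of time on sample data — the trade-off the two Optimum notebooks walk through.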

Community notebooks

More notebooks developed by the community are available here.