# 🤗 Optimum notebooks

This page lists the notebooks associated with each accelerator supported in 🤗 Optimum.

## Optimum Graphcore examples

| Notebook | Description |
|---|---|
| Introduction to Optimum Graphcore | Introduce Optimum Graphcore with a BERT fine-tuning example. |
| Train an external model | Show how to train an external model that is not supported by Optimum or Transformers. |
| Train your language model | Show how to train a model for causal or masked language modeling from scratch. |
| How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. |
| How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. |
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD. |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG. |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT. |
| How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM. |
| How to fine-tune a model on audio classification | Show how to preprocess the data and fine-tune a pretrained speech model on Keyword Spotting. |
| How to fine-tune a model on image classification | Show how to preprocess the data and fine-tune a pretrained model on image classification. |
| wav2vec 2.0 fine-tuning on IPU | Show how to fine-tune a pretrained wav2vec 2.0 model with PyTorch on the Graphcore IPU-POD16 system. |
| wav2vec 2.0 inference on IPU | Show how to run inference with a wav2vec 2.0 model with PyTorch on the Graphcore IPU-POD16 system. |
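
The fine-tuning notebooks above share one pattern: swap the `Trainer` class from 🤗 Transformers for `IPUTrainer` from 🤗 Optimum Graphcore. The following is a minimal sketch of that pattern, assuming `optimum-graphcore` is installed and an IPU system is available; the checkpoint, the `Graphcore/bert-base-ipu` config repository, and the hyperparameters are illustrative, not taken from a specific notebook.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_id = "bert-base-uncased"  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Small illustrative slice of SST-2; the notebooks cover the full GLUE tasks.
dataset = load_dataset("glue", "sst2", split="train[:512]")
train_dataset = dataset.map(
    lambda batch: tokenizer(
        batch["sentence"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
)

# The IPUConfig describes how the model is pipelined across the IPUs.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

training_args = IPUTrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

# IPUTrainer is a drop-in counterpart of transformers.Trainer for IPUs.
trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```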

## Optimum Habana examples

| Notebook | Description |
|---|---|
| How to use DeepSpeed to train models with billions of parameters on Habana Gaudi | Show how to use DeepSpeed to pre-train/fine-tune the 1.6B-parameter GPT2-XL for causal language modeling on Habana Gaudi. |
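
On Gaudi the same swap applies, this time to `GaudiTrainer` from 🤗 Optimum Habana. Below is a minimal sketch of the causal-language-modeling setup, assuming `optimum-habana` is installed on a Gaudi machine; the `gpt2` checkpoint, the dataset slice, and the hyperparameters are illustrative (the notebook scales this recipe up to GPT2-XL with DeepSpeed).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

model_id = "gpt2"  # illustrative; the notebook targets the 1.6B-parameter GPT2-XL
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Tiny illustrative dataset; the notebook uses a full pre-training corpus.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:512]")

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    out["labels"] = out["input_ids"].copy()  # causal LM: labels are the inputs
    return out

train_dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

# The GaudiConfig carries Gaudi-specific settings (mixed precision, fused ops, ...).
gaudi_config = GaudiConfig.from_pretrained("Habana/gpt2")

training_args = GaudiTrainingArguments(
    output_dir="./outputs",
    use_habana=True,
    use_lazy_mode=True,
    per_device_train_batch_size=4,
    # For billion-parameter models, the notebook additionally points the
    # standard deepspeed argument at a DeepSpeed JSON config.
)

trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```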

## Optimum Intel examples

| Notebook | Description |
|---|---|
| How to quantize a model with Intel Neural Compressor for text classification | Show how to apply static and dynamic post-training quantization, as well as quantization-aware training, on a model using Intel Neural Compressor (INC) for any GLUE task. |
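
As a taste of what the notebook covers, here is a minimal sketch of post-training dynamic quantization with INC through `INCQuantizer`, assuming `optimum[neural-compressor]` is installed; the checkpoint and save directory are illustrative. Static quantization and quantization-aware training follow the same structure but additionally need calibration or training data.

```python
from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel.neural_compressor import INCQuantizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Dynamic quantization needs no calibration data; the static approach would
# also pass a calibration_dataset to quantize().
quantization_config = PostTrainingQuantConfig(approach="dynamic")

quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=quantization_config,
    save_directory="inc_quantized_model",  # illustrative output path
)
```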

## ONNX Runtime examples

| Notebook | Description |
|---|---|
| How to quantize a model with ONNX Runtime for text classification | Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task. |
| How to fine-tune a model for text classification with ONNX Runtime | Show how to fine-tune a DistilBERT model on GLUE tasks using ONNX Runtime. |
| How to fine-tune a model for summarization with ONNX Runtime | Show how to fine-tune a T5 model on the BBC news corpus. |
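
The quantization notebook builds on the `ORTQuantizer` API. Here is a minimal sketch of exporting a checkpoint to ONNX and dynamically quantizing it, assuming `optimum[onnxruntime]` at the version of these docs (v1.6, where the export flag is `from_transformers=True`); the checkpoint, target CPU instruction set, and paths are illustrative.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative

# Export the PyTorch checkpoint to ONNX and save it locally.
onnx_model = ORTModelForSequenceClassification.from_pretrained(
    model_id, from_transformers=True
)
onnx_model.save_pretrained("onnx_model")

# Dynamic quantization targeting AVX512-VNNI CPUs; static quantization would
# additionally require a calibration dataset.
quantizer = ORTQuantizer.from_pretrained("onnx_model")
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx_model_quantized", quantization_config=qconfig)
```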