🤗 Optimum notebooks
Here you can find a list of the notebooks associated with each accelerator in 🤗 Optimum.
Optimum Habana examples
Notebook | Description | Colab | Studio Lab |
---|---|---|---|
How to use DeepSpeed to train models with billions of parameters on Habana Gaudi | Show how to use DeepSpeed to pre-train/fine-tune the 1.6B-parameter GPT2-XL for causal language modeling on Habana Gaudi. | | |
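The notebook above combines Optimum Habana's `GaudiTrainer` with a DeepSpeed ZeRO configuration. As a rough orientation, here is a minimal sketch of that setup, assuming the `GaudiTrainer`/`GaudiTrainingArguments` API from `optimum-habana`, a Gaudi machine, and a local `ds_config.json` ZeRO config file; the dataset choice is an illustrative placeholder rather than the notebook's exact setup.

```python
# Minimal sketch: causal-LM fine-tuning on Habana Gaudi with DeepSpeed.
# Assumes optimum-habana's GaudiTrainer API and a ZeRO config in ds_config.json;
# wikitext-2 is a placeholder dataset, not the notebook's exact choice.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "gpt2-xl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a small text corpus for causal language modeling.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
train_dataset = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = GaudiTrainingArguments(
    output_dir="gpt2-xl-clm",
    use_habana=True,                  # run on HPUs rather than CPU/GPU
    use_lazy_mode=True,               # Gaudi lazy-mode graph execution
    gaudi_config_name="Habana/gpt2",  # Gaudi-specific config from the Hub
    deepspeed="ds_config.json",       # ZeRO config that makes billion-parameter training fit
    per_device_train_batch_size=1,
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # labels = shifted inputs
)
trainer.train()
```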
Optimum Intel examples
Notebook | Description | Colab | Studio Lab |
---|---|---|---|
How to quantize a model with Intel Neural Compressor for text classification | Show how to apply static and dynamic post-training quantization, as well as quantization-aware training, on a model using Intel Neural Compressor for any GLUE task. | | |
How to quantize a model with OpenVINO NNCF for question answering | Show how to apply post-training quantization on a question answering model using NNCF and how to accelerate inference with OpenVINO. | | |
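For a feel of the OpenVINO side of this table, the sketch below exports a question-answering model to OpenVINO and runs it through a Transformers pipeline. It assumes the `OVModelForQuestionAnswering` API from recent `optimum-intel` releases; the NNCF quantization step covered by the notebook is not reproduced here, and the model id is an illustrative choice.

```python
# Minimal sketch: OpenVINO-accelerated question answering with optimum-intel.
# Assumes the OVModelForQuestionAnswering export API from recent optimum-intel
# releases; the NNCF post-training quantization step is omitted.
from optimum.intel import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-cased-distilled-squad"  # illustrative SQuAD checkpoint
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForQuestionAnswering.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="Which toolkits does Optimum Intel integrate?",
    context="Optimum Intel integrates OpenVINO, NNCF and Intel Neural Compressor.",
)
print(result["answer"], result["score"])
```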
Optimum ONNX Runtime examples
Notebook | Description | Colab | Studio Lab |
---|---|---|---|
How to quantize a model with ONNX Runtime for text classification | Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task. | | |
How to fine-tune a model for text classification with ONNX Runtime | Show how to fine-tune a DistilBERT model on GLUE tasks using ONNX Runtime. | | |
How to fine-tune a model for summarization with ONNX Runtime | Show how to fine-tune a T5 model on the BBC news corpus. | | |
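To give a flavor of the quantization workflow these notebooks cover, here is a minimal sketch of dynamic quantization with Optimum's ONNX Runtime integration, assuming the `ORTQuantizer`/`AutoQuantizationConfig` API and using an illustrative SST-2 checkpoint:

```python
# Minimal sketch: dynamic quantization of a text classifier with ONNX Runtime.
# Assumes Optimum's ORTQuantizer/AutoQuantizationConfig API; the model id is an
# illustrative GLUE (SST-2) checkpoint, not necessarily the notebook's choice.
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Export the PyTorch checkpoint to ONNX.
onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

quantizer = ORTQuantizer.from_pretrained(onnx_model)
# Dynamic quantization: weights are quantized ahead of time, activations at
# runtime, so no calibration dataset is needed (unlike static quantization).
dqconfig = AutoQuantizationConfig.avx2(is_static=False, per_channel=False)
quantizer.quantize(save_directory="distilbert-sst2-quantized", quantization_config=dqconfig)
```

The quantized model can then be loaded back with `ORTModelForSequenceClassification.from_pretrained("distilbert-sst2-quantized")` and used like any other Transformers model.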
Optimum Graphcore examples
Notebook | Description | Colab |
---|---|---|
Introduction to Optimum Graphcore | Introduce Optimum-Graphcore with a BERT fine-tuning example. | |
Train an external model | Show how to train an external model that is not supported by Optimum or Transformers. | |
Train your language model | Show how to train a model for causal or masked language modeling from scratch. | |
How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | |
How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | |
How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | |
How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD. | |
How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG. | |
How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT. | |
How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM. | |
How to fine-tune a model on audio classification | Show how to preprocess the data and fine-tune a pretrained speech model on keyword spotting. | |
How to fine-tune a model on image classification | Show how to preprocess the data and fine-tune a pretrained model on image classification. | |
wav2vec 2.0 Inference on IPU | Show how to run inference with the wav2vec 2.0 model using PyTorch on the Graphcore IPU-POD16 system. | |
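Most of the fine-tuning notebooks above follow the same pattern: swap Transformers' `Trainer` for `IPUTrainer` and add an `IPUConfig` describing how the model is pipelined across IPUs. Here is a minimal sketch of that pattern, assuming the `IPUTrainer`/`IPUConfig`/`IPUTrainingArguments` API from `optimum-graphcore`; the model id, IPU config and dataset are illustrative choices, not a specific notebook's setup.

```python
# Minimal sketch: BERT fine-tuning on IPUs with optimum-graphcore.
# Assumes the IPUTrainer/IPUConfig/IPUTrainingArguments API; the model id,
# IPU config and dataset are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_id = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Tokenize SST-2 (a GLUE task) with fixed-length padding; IPUs favor static shapes.
raw = load_dataset("glue", "sst2", split="train")
train_dataset = raw.map(
    lambda batch: tokenizer(batch["sentence"], padding="max_length", truncation=True, max_length=128),
    batched=True,
)

# The IPUConfig describes how the model is split and pipelined across IPUs.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

args = IPUTrainingArguments(
    output_dir="bert-sst2-ipu",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,  # the only structural difference from a plain Trainer
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```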