title | url | date | tags | summary | content
---|---|---|---|---|---
Google Colab the free GPU/TPU Jupyter Notebook Service | https://www.philschmid.de/google-cola-the-free-gpu-jupyter | 2020-02-26 | [
"Machine Learning"
] | A Short Introduction to Google Colab as a free Jupyter notebook service from Google. Learn how to use Accelerated Hardware like GPUs and TPUs to run your Machine learning completely for free in the cloud. | ## What is Google Colab
**Google Colaboratory**, or "Colab" for short, is a free Jupyter notebook service from Google. It requires no setup and
runs entirely in the cloud. In Google Colab you can write, execute, save and share your Jupyter notebooks. You access
powerful computing resources like TPUs and GPUs all for free through your browser. All major Python libraries like
[Tensorflow](https://www.tensorflow.org/), [Scikit-learn](https://scikit-learn.org/), [PyTorch](https://pytorch.org/),
[Pandas](https://pandas.pydata.org/), etc. are pre-installed. Google Colab requires no configuration, you only need a
Google Account and then you are good to go. Your notebooks are stored in your [Google Drive](https://drive.google.com/),
or can be loaded from [GitHub](https://github.com/). Colab notebooks can be shared just as you would with Google Docs or
Sheets. Simply click the Share button at the top right of any Colab notebook, or follow these Google Drive
[file sharing instructions](https://support.google.com/drive/answer/2494822?co=GENIE.Platform%3DDesktop&hl=en).
For more information you can look into the official FAQ from Google Research. You can find the FAQ at
[Colaboratory – Google](https://research.google.com/colaboratory/faq.html) or you can have a look at the introduction
video [Get started with Google Colaboratory (Coding TensorFlow) - YouTube](https://www.youtube.com/watch?v=inN8seMm7UI)
## Is it free?
**Yes, it is completely free of charge.** You only need a Google account, which probably all of you already have. You can use the
CPU, GPU & TPU runtimes completely for free. In some cases, Google also offers the opportunity to extend the runtime into
one with 25GB of memory completely for free.
Recently Google introduced "Colab Pro", a paid version for \$9.99/month. With "Colab Pro" you get priority access
to GPUs and TPUs as well as more memory. You can stay connected to your notebooks for up to 24 hours, whereas in the free
version the connection limit is 12 hours per day. For more information read here:
[Google Colab Pro](https://colab.research.google.com/signup?utm_source=faq&utm_medium=link&utm_campaign=why_arent_resources_guaranteed).
## Resources and Runtimes
| Type | Size |
| ------ | ------------------------------------- |
| CPU | 2x |
| Memory | 12.52GB |
| GPU | T4 with 7.98GB or K80 with 11.97GB |
| TPUv2 | 8 units |
| Disk | at least 25GB, will increase as needed |
## How to use accelerated hardware
Changing the hardware runtime is as easy as it gets. You just have to navigate to "Runtime" -> "Change runtime type"
and select your preferred hardware accelerator, GPU or TPU.
![change-runtime](/static/blog/google-cola-the-free-gpu-jupyter/change-runtime.gif)
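After switching to a GPU runtime, you can quickly verify that the accelerator is actually available. The snippet below is a minimal sketch (not part of the original post) that assumes a GPU runtime and uses PyTorch, which is pre-installed in Colab.
```python
import torch

# check if a CUDA-capable GPU is visible to the runtime
print(torch.cuda.is_available())
# print the name of the GPU if one is available
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```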
## How to get started
In the following section, I will describe and show some of the best features Google Colab has to offer. I created a
[Colab Notebook](https://colab.research.google.com/drive/1nwJ0BQjZACbGbt-AfG93LjJNT05mp4gw) with all of the features for
you to look up.
### Creating a Colab Notebook
You can create a Colab notebook directly in the [Colab Environment](https://colab.research.google.com/) or through your
Google Drive.
### Access your google drive storage in your Colab notebook by mounting it
If you want to mount your Google Drive to your notebook, you simply have to execute the snippet below. After you execute
it, you will see a URL where you have to log in to your Google account and authorize Google Colab to access your Drive
storage. Afterward, you can copy the key from the link into the displayed input field in the Colab notebook.
```python
from google.colab import drive
drive.mount('/content/drive/')
```
You can show your files with `!ls /content/drive` or use the navigation on the left side.
### Upload and Download local files to your Colab notebook
You can easily upload and download files from your local machine by executing `files.upload()`, which creates an HTML
file input field, and `files.download()`.
#### Upload
```python
from google.colab import files
uploaded = files.upload()
```
#### Download
```python
from google.colab import files
files.download("File Name")
```
##### Download a complete folder by zipping it
```python
from google.colab import files
import os
import zipfile

foldername = "test_folder"
# add all files of the folder (recursively) to a zip archive
with zipfile.ZipFile(f'{foldername}.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:
    for root, dirs, filenames in os.walk(foldername):
        for filename in filenames:
            zipf.write(os.path.join(root, filename))
files.download(f'{foldername}.zip')
```
### Change your directory permanently
You can change your directory permanently from `/content` to something you like by executing `%cd path` in a cell. This
is very useful if you clone your git repository into your Colab notebook.
```python
%cd path
```
### Show an image in Colab
You can show pictures inline, just as you do in Jupyter, with this simple snippet:
```python
from IPython.display import Image, display
display(Image('image.jpg'))
```
### Advanced Pandas table
Google Colab offers an improved view of data frames in addition to the normal, familiar Jupyter notebook view, where
you can filter columns directly without using Python. You can even search for a range in date fields. You can enable it by
executing one line of code.
```python
%load_ext google.colab.data_table
```
![pandas-extended-view](/static/blog/google-cola-the-free-gpu-jupyter/pandas-extended-view.jpg)
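After loading the extension, simply displaying a DataFrame as the last expression of a cell uses the interactive table. Here is a minimal sketch with made-up sample data:
```python
import pandas as pd

# any DataFrame displayed as the output of a cell now uses the interactive table view
df = pd.DataFrame({"name": ["Anna", "Ben", "Clara"], "score": [0.9, 0.7, 0.8]})
df
```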
### How to use git in Colab
Google Colab provides a lot of benefits, but one downside is that you have to save your notebook to your Google Drive.
Normally you use some kind of git tool. The easiest way to overcome this problem is either by copying your notebook from
GitHub into your Colab environment with this
[easy copy integration for notebooks](https://colab.research.google.com/github/) or by using CLI commands to clone your
private and public repositories into your Colab environment. The only problem with using GitHub repositories in Colab is that you
cannot push back to your repository; you have to save it manually ("File" -> "Save a copy as a GitHub Gist" or "Save a
copy in GitHub"). If you want to integrate your repository directly you have to set up git in your Colab environment
like you normally do on your local machine. Chella wrote an article in Towards Data Science on how to do it.
[From Git to Colab, via SSH - Towards Data Science](https://towardsdatascience.com/using-git-with-colab-via-ssh-175e8f3751ec)
```bash
# git clone private repository
!git clone https://username:password@github.com/username/repository.git
# git clone public repository
!git clone https://github.com/fastai/courses.git
```
An extra tip: after you have cloned your repository, you can permanently change the directory to the repository by executing
`%cd /content/your_repository_name`. After that, every cell will be executed in the directory of your repository.
### Execute CLI commands
You can execute CLI commands, for example to install or update Python packages or to run bash scripts, just by
putting `!` before the command.
```bash
!pip install fastai
```
### Customize Shortcuts and changing Theme
You can customize shortcuts by navigating to "Tools" -> "Keyboard shortcuts…". If you want to change your theme, navigate
to "Tools" -> "Settings" and change it under "Site".
![customizing_shortcuts_and_changing_theme](/static/blog/google-cola-the-free-gpu-jupyter/customizing_shortcuts_and_changing_theme.gif)
---
Thanks for reading my first blog post about Google Colaboratory.
See you soon 😊 |
Hugging Face Transformers Examples | https://www.philschmid.de/huggingface-transformers-examples | 2023-01-26 | [
"HuggingFace",
"Transformers",
"BERT",
"PyTorch"
] | Learn how to leverage Hugging Face Transformers to easily fine-tune your models. | <html class="max-w-none pt-6 pb-8 font-serif " itemscope itemtype="https://schema.org/FAQPage">
Machine learning and the adoption of the Transformer architecture are rapidly growing and will revolutionize the way we live and work. From self-driving cars to personalized medicine, the applications of [Transformers](https://huggingface.co/docs/transformers/index) are limitless. In this blog post, we'll explore how to leverage [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) and walk through examples ranging from natural language processing to computer vision. Whether you're new to [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) or an expert, this post is sure to provide valuable insights and inspiration.
We will learn about the following:
1. [What is Hugging Face Transformers?](#what-is-hugging-face-transformers)
2. [What are Transformers’ examples?](#what-are-transformers-examples)
3. [How to use Transformers examples?](#how-to-use-transformers-examples)
4. [How to use your own data?](#how-to-use-your-own-data)
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="what-is-hugging-face-transformers">What is Hugging Face Transformers?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
[Hugging Face Transformers](https://huggingface.co/docs/transformers/index) is a Python library of pre-trained state-of-the-art machine learning models for natural language processing, computer vision, speech, or multi-modalities. [Transformers](https://huggingface.co/docs/transformers/index) provides access to popular Transformer architectures, including BERT, GPT2, RoBERTa, ViT, Whisper, Wav2vec2, T5, LayoutLM, and CLIP. These models support common tasks in different modalities, such as:
📝 Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
🖼️ Computer Vision: image classification, object detection, and segmentation.
🗣️ Audio: automatic speech recognition and audio classification.
🐙 Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
The library can be used with the PyTorch, TensorFlow, or JAX frameworks and allows users to easily fine-tune or use the pre-trained models on their own data. If you are new to Hugging Face [Transformers](https://huggingface.co/docs/transformers/index), check out the completely free Hugging Face course at: [https://huggingface.co/course](https://huggingface.co/course/chapter1/1)
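As a quick illustration of how little code is needed, here is a minimal sketch using the high-level `pipeline` API; it downloads the library's default sentiment-analysis checkpoint and is only meant as an example, not as part of the fine-tuning workflow discussed below.
```python
from transformers import pipeline

# create a text-classification pipeline with the default sentiment-analysis model
classifier = pipeline("sentiment-analysis")

# run inference on a single example
print(classifier("Hugging Face Transformers makes machine learning easy!"))
# [{'label': 'POSITIVE', 'score': ...}]
```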
</div>
</div>
</div>
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="what-are-transformers-examples">What are Transformers’ examples?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
As we know, [Transformers](https://huggingface.co/docs/transformers/index) can be used to fine-tune models like BERT, but did you know that the [GitHub repository](https://github.com/huggingface/transformers) of transformers provides over 20 ready-to-use [examples](https://github.com/huggingface/transformers/tree/main/examples)?
[Hugging Face Transformers](https://huggingface.co/docs/transformers/index) examples are maintained Python scripts to fine-tune or pre-train [Transformers](https://huggingface.co/docs/transformers/index) models. Currently, there are examples available for:
- [Audio Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- [Contrastive Image Text](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text)
- [Image Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification)
- [Image Pretraining](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining)
- [Language Modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling)
- [Multiple Choice](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice)
- [Question Answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)
- [Semantic Segmentation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation)
- [Speech Pretraining](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining)
- [Speech Recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- [Summarization](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification)
- [Text Generation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation)
- [Token Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification)
- [Translation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation)
Example scripts can be used with [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax). As the name "examples" suggests, these are examples to help [Transformers](https://huggingface.co/docs/transformers/index) users get started quickly, serve as inspiration and help you create your own scripts, and enable users to run tests.
</div>
</div>
</div>
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="how-to-use-transformers-examples">How to use Transformers examples?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Each release of [Transformers](https://huggingface.co/docs/transformers/index) has its own set of example scripts, which are tested and maintained. This is important to keep in mind when using `examples/`: if you try to run an example from, e.g., a newer version than the `transformers` version you have installed, it might fail. Every example comes with a README in the repository, which documents the features of the example and the arguments that are supported. All `examples` provide an identical set of arguments to make it easy for users to switch between tasks. Now, let's get started.
### 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including `transformers` and `datasets`. The version of `transformers` we install will be the version of the examples we are going to use. If you have `transformers` already installed, you need to check your version.
```bash
pip install torch
pip install "transformers==4.25.1" datasets --upgrade
```
### 2. Download the example script
The example scripts are stored in the [GitHub repository](https://github.com/huggingface/transformers) of transformers. This means we first need to clone the repository and then check out the release of the `transformers` version we have installed in step 1 (for us, `4.25.1`).
```bash
git clone https://github.com/huggingface/transformers
cd transformers
git checkout tags/v4.25.1 # change 4.25.1 to your version if different
```
### 3. Fine-tune BERT for text-classification
Before we can run our script we first need to define the arguments we want to use. For `text-classification` we need at least a `model_name_or_path`, which can be any supported architecture from the [Hugging Face Hub](https://huggingface.co) or a local path to a `transformers` model. Additional parameters we will use are:
- `dataset_name` : an ID for a dataset hosted on the [Hugging Face Hub](https://huggingface.co/datasets)
- `do_train` & `do_eval`: to train and evaluate our model
- `num_train_epochs`: the number of epochs we use for training.
- `per_device_train_batch_size`: the batch size used during training per GPU
- `output_dir`: where our trained model and logs will be saved
You can find a full list of supported parameters in the [script](https://github.com/huggingface/transformers/blob/6f3faf3863defe394e566c57b7d1ad3928c4ef49/examples/pytorch/text-classification/run_glue.py#L71). Before we can run our script we have to make sure all dependencies needed for the example are installed. Every example script that requires additional dependencies beyond `transformers` and `datasets` provides a `requirements.txt` in its directory, which we can install.
```bash
pip install -r examples/pytorch/text-classification/requirements.txt
```
That's it, now we can run our script from the CLI, which will start training BERT for `text-classification` on the `emotion` dataset.
```bash
python3 examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--dataset_name emotion \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--num_train_epochs 3 \
--output_dir /bert-test
```
### 4. Fine-tune BART for summarization
In section 3 we learned how easy it is to leverage the `examples` to fine-tune a BERT model for `text-classification`. In this section we show you how easy it is to switch between different tasks. We will now fine-tune BART for summarization on the [CNN dailymail dataset](https://huggingface.co/datasets/cnn_dailymail). We will provide the same arguments as for `text-classification`, but extend them with:
- `dataset_config_name` to use a specific version of the dataset
- `text_column` the field in our dataset, which holds the text we want to summarize
- `summary_column` the field in our dataset, which holds the summary we want to learn.
Every example script that requires additional dependencies beyond `transformers` and `datasets` provides a `requirements.txt` in its directory, which we can install.
```bash
pip install -r examples/pytorch/summarization/requirements.txt
```
That's it, now we can run our script from the CLI, which will start training BART for `summarization` on the `cnn_dailymail` dataset.
```bash
python3 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path facebook/bart-base \
--dataset_name cnn_dailymail \
--dataset_config_name "3.0.0" \
--text_column "article" \
--summary_column "highlights" \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--num_train_epochs 3 \
--output_dir /bert-test
```
</div>
</div>
</div>
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="how-to-use-your-own-data">How to use your own data?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
In the previous section we learned how to use Transformers examples with datasets available on the [Hugging Face Hub](https://huggingface.co/datasets), but that's not all you can do. Hugging Face Transformers examples also support local CSV and JSON files, which you can use for training your models. In this section we see how we can use a local CSV file with our `text-classification` example.
This section assumes that you completed step 1 & 2 from the “How to use Transformers examples?” section.
To be able to use local data files we will provide the same arguments as for `text-classification`, but extend them with:
- `train_file`: path pointing to a local CSV or JSONLINES file with your training data
- `validation_file`: path pointing to a local CSV or JSONLINES file with your validation data
Both files should have a `text` field, which includes our data, and a `label` field, which holds the class label for the `text`.
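For illustration only, here is a minimal, hypothetical sketch of how such files could be created with pandas; the example rows and file names are assumptions, not part of the original scripts.
```python
import pandas as pd

# hypothetical example rows with the required "text" and "label" fields
train_df = pd.DataFrame(
    {"text": ["i love this product", "this is terrible"], "label": [1, 0]}
)
eval_df = pd.DataFrame(
    {"text": ["not bad at all"], "label": [1]}
)

# write the files that are later passed via --train_file and --validation_file
train_df.to_csv("train.csv", index=False)
eval_df.to_csv("eval.csv", index=False)
```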
```bash
pip install -r examples/pytorch/text-classification/requirements.txt
```
That's it, now we can run our script from the CLI, which will start training BERT for `text-classification` on our local files.
```bash
python3 examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--train_file local/path/train.csv \
--validation_file local/path/eval.csv \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--num_train_epochs 3 \
--output_dir /bert-test
```
</div>
</div>
</div>
</html>
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
BERT Text Classification in a different language | https://www.philschmid.de/bert-text-classification-in-a-different-language | 2020-05-22 | [
"NLP",
"Bert",
"HuggingFace"
] | Build a non-English (German) BERT multi-class text classification model with HuggingFace and Simple Transformers. | Currently, we have 7.5 billion people living in the world in around 200 nations. Only
[1.2 billion people of them are native English speakers](https://en.wikipedia.org/wiki/List_of_countries_by_English-speaking_population).
This leads to a lot of unstructured non-English textual data.
Most of the tutorials and blog posts demonstrate how to build text classification, sentiment analysis,
question-answering, or text generation models with BERT-based architectures in English. To fill this gap, I am
going to show you how to build a non-English multi-class text classification model.
![native-english-map](/static/blog/bert-text-classification-in-a-different-language/map.png)
Since you are reading this article, I think it's safe to assume that you have heard of BERT. If you haven't, or if you'd like a
refresher, I recommend reading this [paper](https://arxiv.org/pdf/1810.04805.pdf).
In deep learning, there are currently two options for how to build language models. You can build either monolingual
models or multilingual models.
> "multilingual, or not multilingual, that is the question" - as Shakespeare would have said
Multilingual models describe machine learning models that can understand different languages. An example of a
multilingual model is [mBERT](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)
from Google research.
[This model supports and understands 104 languages.](https://github.com/google-research/bert/blob/master/multilingual.md)
Monolingual models, as the name suggests, can understand one language.
Multilingual models are already achieving good results on certain tasks. But these models are bigger, need more data,
and also more time to be trained. These properties lead to higher costs due to the larger amount of data and time
resources needed.
Due to this fact, I am going to show you how to train a monolingual non-English BERT-based multi-class text
classification model. Wow, that was a long sentence!
![meme](/static/blog/bert-text-classification-in-a-different-language/meme.png)
---
## Tutorial
We are going to use [Simple Transformers](https://github.com/ThilinaRajapakse/simpletransformers) - an NLP library based
on the [Transformers](https://github.com/huggingface/transformers) library by HuggingFace. Simple Transformers allows us
to fine-tune Transformer models in a few lines of code.
As the dataset, we are going to use the [Germeval 2019](https://projects.fzai.h-da.de/iggsa/projekt/), which consists of
German tweets. We are going to detect and classify abusive language tweets. These tweets are categorized in 4 classes:
`PROFANITY`, `INSULT`, `ABUSE`, and `OTHERS`. The highest score achieved on this dataset is `0.7361`.
### We are going to:
- install Simple Transformers library
- select a pre-trained monolingual model
- load the dataset
- train/fine-tune our model
- evaluate the results of training
- save the trained model
- load the model and predict a real example
I am using Google Colab with a GPU runtime for this tutorial. If you are not sure how to use a GPU Runtime take a look
[here](https://www.philschmid.de/google-colab-the-free-gpu-tpu-jupyter-notebook-service).
---
## Install Simple Transformers library
First, we install `simpletransformers` with pip. If you are not using Google Colab you can check out the installation
guide [here](https://github.com/ThilinaRajapakse/simpletransformers).
```python
# install simpletransformers
!pip install simpletransformers
# check installed version
!pip freeze | grep simpletransformers
# simpletransformers==0.28.2
```
---
## Select a pre-trained monolingual model
Next, we select the pre-trained model. As mentioned above the Simple Transformers library is based on the Transformers
library from HuggingFace. This enables us to use every pre-trained model provided in the
[Transformers library](https://huggingface.co/transformers/pretrained_models.html) and all community-uploaded models.
For a list that includes all community-uploaded models, I refer to
[https://huggingface.co/models](https://huggingface.co/models).
We are going to use the `distilbert-base-german-cased` model, a
[smaller, faster, cheaper version of BERT](https://huggingface.co/transformers/model_doc/distilbert.html). It uses 40%
fewer parameters than `bert-base-uncased` and runs 60% faster while still preserving over 95% of BERT's performance.
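If you want to double-check that the checkpoint is available before training, a minimal sketch (not part of the original tutorial) is to load its tokenizer directly with the underlying `transformers` library:
```python
from transformers import AutoTokenizer

# download the tokenizer of the pre-trained German checkpoint from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-german-cased")
print(tokenizer.tokenize("Das ist ein Test."))
```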
---
## Load the dataset
The dataset is stored in two text files we can retrieve from the
[competition page](https://projects.fzai.h-da.de/iggsa/). One option to download them is using 2 simple `wget` CLI
commands.
```python
!wget https://projects.fzai.h-da.de/iggsa/wp-content/uploads/2019/08/germeval2019GoldLabelsSubtask1_2.txt
!wget https://projects.fzai.h-da.de/iggsa/wp-content/uploads/2019/09/germeval2019.training_subtask1_2_korrigiert.txt
```
Afterward, we use some `pandas` magic to create a dataframe.
```python
import pandas as pd
class_list = ['INSULT','ABUSE','PROFANITY','OTHER']
df1 = pd.read_csv('germeval2019GoldLabelsSubtask1_2.txt',sep='\t', lineterminator='\n',encoding='utf8',names=["tweet", "task1", "task2"])
df2 = pd.read_csv('germeval2019.training_subtask1_2_korrigiert.txt',sep='\t', lineterminator='\n',encoding='utf8',names=["tweet", "task1", "task2"])
df = pd.concat([df1,df2])
df['task2'] = df['task2'].str.replace('\r', "")
df['pred_class'] = df.apply(lambda x: class_list.index(x['task2']),axis=1)
df = df[['tweet','pred_class']]
print(df.shape)
df.head()
```
Since we don't have a test dataset, we split our dataset into `train_df` and `test_df`. We use 90% of the data for training
(`train_df`) and 10% for testing (`test_df`).
```python
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(df, test_size=0.10)
print('train shape: ',train_df.shape)
print('test shape: ',test_df.shape)
# train shape: (6309, 2)
# test shape: (702, 2)
```
---
## Load pre-trained model
The next step is to load the pre-trained model. We do this by creating a `ClassificationModel` instance called `model`.
This instance takes the parameters of:
- the architecture (in our case `"bert"`)
- the pre-trained model (`"distilbert-base-german-cased"`)
- the number of class labels (`4`)
- and our hyperparameter for training (`train_args`).
You can configure the hyperparameters within a wide range of possibilities. For a detailed description of each
attribute, please refer to the
[documentation](https://simpletransformers.ai/docs/usage/#configuring-a-simple-transformers-model).
```python
from simpletransformers.classification import ClassificationModel
# define hyperparameter
train_args ={"reprocess_input_data": True,
"fp16":False,
"num_train_epochs": 4}
# Create a ClassificationModel
model = ClassificationModel(
"bert", "distilbert-base-german-cased",
num_labels=4,
args=train_args
)
```
---
## Train/fine-tune our model
To train our model we only need to run `model.train_model()` and specify which dataset to train on.
```python
model.train_model(train_df)
```
---
## Evaluate the results of training
After we have trained our model successfully, we can evaluate it. To do so, we create a simple helper function
`f1_multiclass()`, which is used to calculate the `f1_score`. The `f1_score` is a measure of a model's accuracy. More on
that [here](https://en.wikipedia.org/wiki/F1_score).
```python
from sklearn.metrics import f1_score, accuracy_score
def f1_multiclass(labels, preds):
return f1_score(labels, preds, average='micro')
result, model_outputs, wrong_predictions = model.eval_model(test_df, f1=f1_multiclass, acc=accuracy_score)
# {'acc': 0.6894586894586895,
# 'eval_loss': 0.8673831869594075,
# 'f1': 0.6894586894586895,
# 'mcc': 0.25262380289641617}
```
We achieved an `f1_score` of `0.6895`. Initially, this seems rather low, but keep in mind: the highest submission at
[Germeval 2019](https://projects.fzai.h-da.de/iggsa/submissions/) was `0.7361`. We would have achieved a top 20 rank
without tuning the hyperparameters. This is pretty impressive!
In a future post, I am going to show you how to achieve a higher `f1_score` by tuning the hyperparameters.
---
## Save the trained model
Simple Transformers saves the `model` automatically every `2000` steps and at the end of the training process. The
default directory is `outputs/`. But the `output_dir` is a hyperparameter and can be overwritten. I created a helper
function `pack_model()`, which we use to `pack` all required model files into a `tar.gz` file for deployment.
```python
import os
import tarfile
def pack_model(model_path='',file_name=''):
files = [files for root, dirs, files in os.walk(model_path)][0]
with tarfile.open(file_name+ '.tar.gz', 'w:gz') as f:
for file in files:
f.add(f'{model_path}/{file}')
# run the function
pack_model('output_path','model_name')
```
---
## Load the model and predict a real example
As a final step, we load and predict a real example. Since we packed our files a step earlier with `pack_model()`, we
have to `unpack` them first. Therefore I wrote another helper function `unpack_model()` to unpack our model files.
```python
import os
import tarfile
def unpack_model(model_name=''):
tar = tarfile.open(f"{model_name}.tar.gz", "r:gz")
tar.extractall()
tar.close()
unpack_model('model_name')
```
To load a saved model, we only need to provide the `path` to our saved files and initialize it the same way as we did it
in the training step. _Note: you will need to specify the correct (usually the same used in training) args when loading
the model._
```python
from simpletransformers.classification import ClassificationModel
# define hyperparameter
train_args ={"reprocess_input_data": True,
"fp16":False,
"num_train_epochs": 4}
# Create a ClassificationModel with our trained model
model = ClassificationModel(
"bert", 'path_to_model/',
num_labels=4,
args=train_args
)
```
After initializing it, we can use the `model.predict()` function to classify a given input. In this
example, we take a tweet from the Germeval 2018 dataset.
```python
class_list = ['INSULT','ABUSE','PROFANITY','OTHER']
test_tweet1 = "Meine Mutter hat mir erzählt, dass mein Vater einen Wahlkreiskandidaten nicht gewählt hat, weil der gegen die Homo-Ehe ist"
predictions, raw_outputs = model.predict([test_tweet1])
print(class_list[predictions[0]])
# OTHER
test_tweet2 = "Frau #Böttinger meine Meinung dazu ist sie sollten uns mit ihrem Pferdegebiss nicht weiter belästigen #WDR"
predictions, raw_outputs = model.predict([test_tweet2])
print(class_list[predictions[0]])
# INSULT
```
Our model predicted the correct classes `OTHER` and `INSULT`.
---
## Conclusion
In conclusion, we can say we achieved our goal of creating a non-English BERT-based text classification model.
Our example referred to the German language but can easily be transferred into another language. HuggingFace offers a
lot of pre-trained models for languages like French, Spanish, Italian, Russian, Chinese, ...
---
Thanks for reading. You can find the colab notebook with the complete code
[here](https://colab.research.google.com/drive/1kAlGGGsZaFaFoL0lZ0HK4xUR6QS8gipn#scrollTo=JG2gN7KUqyjY).
If you have any questions, feel free to contact me. |
Semantic Segmentation with Hugging Face's Transformers & Amazon SageMaker | https://www.philschmid.de/image-segmentation-sagemaker | 2022-05-03 | [
"AWS",
"SegFormer",
"Vision",
"Sagemaker"
] | Learn how to do image segmentation with Hugging Face Transformers, SegFormer and Amazon SageMaker. | Transformer models are changing the world of machine learning, starting with natural language processing, and now, with audio and computer vision. Hugging Face's mission is to democratize good machine learning and give anyone the opportunity to use these new state-of-the-art machine learning models.
Together with Amazon SageMaker and AWS we have been working on extending the functionalities of the Hugging Face Inference DLC and the Python SageMaker SDK to make it easier to use speech and vision models together with `transformers`.
You can now use the Hugging Face Inference DLC to do [automatic speech recognition](https://huggingface.co/tasks/automatic-speech-recognition) using Meta AI's [wav2vec2](https://arxiv.org/abs/2006.11477) model or Microsoft's [WavLM](https://arxiv.org/abs/2110.13900), or use NVIDIA's [SegFormer](https://arxiv.org/abs/2105.15203) for [image segmentation](https://huggingface.co/tasks/image-segmentation).
This guide will walk you through how to do [Image Segmentation](https://huggingface.co/tasks/image-segmentation) using [segformer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) and new `DataSerializer`.
![overview](/static/blog/image-segmentation-sagemaker/semantic_segmentation.png)
In this example you will learn how to:
1. Setup a development Environment and permissions for deploying Amazon SageMaker Inference Endpoints.
2. Deploy a segformer model to Amazon SageMaker for image segmentation
3. Send requests to the endpoint to do image segmentation.
Let's get started! 🚀
---
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
## 1. Setup a development Environment and permissions for deploying Amazon SageMaker Inference Endpoints.
Setting up the development environment and permissions needs to be done for the automatic-speech-recognition example and the semantic-segmentation example. First, we update the `sagemaker` SDK to make sure we have the new `DataSerializer`.
```python
!pip install sagemaker segmentation-mask-overlay pillow matplotlib --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.86.0"
```
After we have updated the SDK, we can set the permissions.
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 2. Deploy a segformer model to Amazon SageMaker for image segmentation
Image Segmentation divides an image into segments where each pixel in the image is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation.
We use the [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) model running our segmentation endpoint. This model is fine-tuned on ADE20k (scene-centric image) at resolution 512x512.
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serializers import DataSerializer
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'nvidia/segformer-b0-finetuned-ade-512-512',
'HF_TASK':'image-segmentation',
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
```
Before we are able to deploy our `HuggingFaceModel` class, we need to create a new serializer which supports our image data. Serializers are used in the Predictor and in the `predict` method to serialize our data to a specific `mime-type`, which is sent to the endpoint. The default serializer for the HuggingFacePredictor is a JSON serializer, but since we are not going to send text data to the endpoint, we will use the DataSerializer.
```python
# create a serializer for the data
image_serializer = DataSerializer(content_type='image/x-image') # using x-image to support multiple image formats
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.g4dn.xlarge', # ec2 instance type
serializer=image_serializer, # serializer for our image data.
)
```
## 3. Send requests to the endpoint to do image segmentation.
The `.deploy()` returns an `HuggingFacePredictor` object with our `DataSerializer` which can be used to request inference. This `HuggingFacePredictor` makes it easy to send requests to your endpoint and get the results back.
We will use 2 different methods to send requests to the endpoint:
a. Provide an image file via path to the predictor
b. Provide a binary image data object to the predictor
### a. Provide an image file via path to the predictor
Using an image file as input is as easy as providing the path to its location. The `DataSerializer` will then read it and send the bytes to the endpoint.
We can use a `fixtures_ade20k` sample hosted on huggingface.co
```python
!wget https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/raw/main/ADE_val_00000001.jpg
```
Before we send our request, let's create a helper function to display our segmentation results.
```python
from PIL import Image
import io
from segmentation_mask_overlay import overlay_masks
import numpy as np
import base64
import matplotlib.pyplot as plt
def stringToRGB(base64_string):
# convert base64 string to numpy array
imgdata = base64.b64decode(str(base64_string))
image = Image.open(io.BytesIO(imgdata))
return np.array(image)
def get_overlay(original_image_path,result):
    masks = [stringToRGB(r["mask"]).astype('bool') for r in result]
masks_labels = [r["label"] for r in result]
cmap = plt.cm.tab20(np.arange(len(masks_labels)))
image = Image.open(original_image_path)
overlay_masks(image, masks, labels=masks_labels, colors=cmap, mask_alpha=0.5)
```
To send a request providing the path to the image file, we can use the following code:
```python
image_path = "ADE_val_00000001.jpg"
res = predictor.predict(data=image_path)
print(res[0].keys())
get_overlay(image_path,res)
```
![overlay](/static/blog/image-segmentation-sagemaker/example.png)
### b. Provide a binary image data object to the predictor
Instead of providing a path to the image file, we can also directly provide its bytes by reading the file in Python.
_make sure `ADE_val_00000001.jpg` is in the directory_
```python
image_path = "ADE_val_00000001.jpg"
with open(image_path, "rb") as data_file:
image_data = data_file.read()
res = predictor.predict(data=image_data)
print(res[0].keys())
get_overlay(image_path,res)
```
![overlay](/static/blog/image-segmentation-sagemaker/example.png)
### Clean up
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## Conclusion
We successfully managed to deploy SegFormer to Amazon SageMaker for image segmentation. The new `DataSerializer` makes it super easy to work with different `mime-types` than `json`/`txt`, which we are used to from NLP. We can use the `DataSerializer` to send images to the endpoint and get the results back.
With this support we can now build state-of-the-art computer vision systems on Amazon SageMaker with transparent insights on which models are used and how the data is processed. We could even go further and extend the inference part with a custom `inference.py` to include custom post-processing. |
Fine-tune FLAN-T5 for chat & dialogue summarization | https://www.philschmid.de/fine-tune-flan-t5 | 2022-12-27 | [
"T5",
"Summarization",
"HuggingFace",
"Chat"
] | Learn how to fine-tune Google's FLAN-T5 for chat & dialogue summarization using Hugging Face Transformers. | In this blog, you will learn how to fine-tune [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) for chat & dialogue summarization using Hugging Face Transformers. If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, also covering more languages.
In this example we will use the [samsum](https://huggingface.co/datasets/samsum) dataset, a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load and prepare samsum dataset](#2-load-and-prepare-samsum-dataset)
3. [Fine-tune and evaluate FLAN-T5](#3-fine-tune-and-evaluate-flan-t5)
4. [Run Inference and summarize ChatGPT dialogues](#4-run-inference-and-summarize-chatgpt-dialogues)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: FLAN-T5, just a better T5
FLAN-T5 released with the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper is an enhanced version of T5 that has been finetuned in a mixture of tasks. The paper explores instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. The paper discovers that overall instruction finetuning is a general method for improving the performance and usability of pretrained language models.
![flan-t5](/static/blog/fine-tune-flan-t5/flan-t5.png)
- Paper: https://arxiv.org/abs/2210.11416
- Official repo: https://github.com/google-research/t5x
---
Now we know what FLAN-T5 is, let's get started. 🚀
_Note: This tutorial was created and run on a p3.2xlarge AWS EC2 Instance including a NVIDIA V100._
## 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
```python
!pip install pytesseract transformers datasets rouge-score nltk tensorboard py7zr --upgrade
# install git-fls for pushing model and logs to the hugging face hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## 2. Load and prepare samsum dataset
We will use the [samsum](https://huggingface.co/datasets/samsum) dataset, a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.
```json
{
"id": "13818513",
"summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
"dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"
}
```
```python
dataset_id = "samsum"
```
To load the `samsum` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
# Load dataset from the hub
dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 14732
# Test dataset size: 819
```
Let's check out an example of the dataset.
```python
from random import randrange
sample = dataset['train'][randrange(len(dataset["train"]))]
print(f"dialogue: \n{sample['dialogue']}\n---------------")
print(f"summary: \n{sample['summary']}\n---------------")
```
To train our model we need to convert our inputs (text) to token IDs. This is done by a 🤗 Transformers Tokenizer. If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id="google/flan-t5-base"
# Load tokenizer of FLAN-t5-base
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Before we can start training, we need to preprocess our data. Abstractive Summarization is a text2text-generation task. This means our model will take a text as input and generate a summary as output. For this we want to understand how long our input and output will be to be able to efficiently batch our data.
```python
from datasets import concatenate_datasets
# The maximum total input sequence length after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded.
tokenized_inputs = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["dialogue"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
max_source_length = max([len(x) for x in tokenized_inputs["input_ids"]])
print(f"Max source length: {max_source_length}")
# The maximum total sequence length for target text after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded."
tokenized_targets = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["summary"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
max_target_length = max([len(x) for x in tokenized_targets["input_ids"]])
print(f"Max target length: {max_target_length}")
```
```python
def preprocess_function(sample,padding="max_length"):
# add prefix to the input for t5
inputs = ["summarize: " + item for item in sample["dialogue"]]
# tokenize inputs
model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
# Tokenize targets with the `text_target` keyword argument
labels = tokenizer(text_target=sample["summary"], max_length=max_target_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length":
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=["dialogue", "summary", "id"])
print(f"Keys of tokenized dataset: {list(tokenized_dataset['train'].features)}")
```
## 3. Fine-tune and evaluate FLAN-T5
After we have processed our dataset, we can start training our model. Therefore we first need to load our [FLAN-T5](https://huggingface.co/models?search=flan-t5) from the Hugging Face Hub. In the example we are using an instance with an NVIDIA V100, meaning that we will fine-tune the `base` version of the model.
_I plan to do a follow-up post on how to fine-tune the `xxl` version of the model using Deepspeed._
```python
from transformers import AutoModelForSeq2SeqLM
# huggingface hub model id
model_id="google/flan-t5-base"
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
```
We want to evaluate our model during training. The `Trainer` supports evaluation during training by providing a `compute_metrics` function.
The most commonly used metric to evaluate summarization tasks is the [ROUGE score](<https://en.wikipedia.org/wiki/ROUGE_(metric)>) (short for Recall-Oriented Understudy for Gisting Evaluation). This metric does not behave like standard accuracy: it compares a generated summary against a set of reference summaries.
We are going to use the `evaluate` library to compute the `rouge` score.
```python
import evaluate
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
nltk.download("punkt")
# Metric
metric = evaluate.load("rouge")
# helper function to postprocess text
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [label.strip() for label in labels]
# rougeLSum expects newline after each sentence
preds = ["\n".join(sent_tokenize(pred)) for pred in preds]
labels = ["\n".join(sent_tokenize(label)) for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
result = {k: round(v * 100, 4) for k, v in result.items()}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
return result
```
Before we can start training, we need to create a `DataCollator` that will take care of padding our inputs and labels. We will use the `DataCollatorForSeq2Seq` from the 🤗 Transformers library.
```python
from transformers import DataCollatorForSeq2Seq
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = -100
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
```
The last step is to define the hyperparameters (`TrainingArguments`) we want to use for our training. We are leveraging the [Hugging Face Hub](https://huggingface.co/models) integration of the `Trainer` to automatically push our checkpoints, logs and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
# Hugging Face repository id
repository_id = f"{model_id.split('/')[1]}-{dataset_id}"
# Define training args
training_args = Seq2SeqTrainingArguments(
output_dir=repository_id,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
predict_with_generate=True,
fp16=False, # Overflows with fp16
learning_rate=5e-5,
num_train_epochs=5,
# logging & evaluation strategies
logging_dir=f"{repository_id}/logs",
logging_strategy="steps",
logging_steps=500,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
load_best_model_at_end=True,
# metric_for_best_model="overall_f1",
# push to hub parameters
report_to="tensorboard",
push_to_hub=False,
hub_strategy="every_save",
hub_model_id=repository_id,
hub_token=HfFolder.get_token(),
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["test"],
compute_metrics=compute_metrics,
)
```
We can start our training by using the `train` method of the `Trainer`.
```python
# Start training
trainer.train()
```
![flan-t5-tensorboard](/static/blog/fine-tune-flan-t5/flan-t5-tensorboard.png)
Nice, we have trained our model. 🎉 Let's evaluate the best model again on the test set.
```python
trainer.evaluate()
```
The best score we achieved is a `rouge1` score of `47.23`.
Let's save our results and tokenizer to the Hugging Face Hub and create a model card.
```python
# Save our tokenizer and create model card
tokenizer.save_pretrained(repository_id)
trainer.create_model_card()
# Push the results to the hub
trainer.push_to_hub()
```
## 4. Run Inference and summarize ChatGPT dialogues
Now that we have a trained model, we can use it to run inference. We will use the `pipeline` API from transformers and a `test` example from our dataset.
```python
from transformers import pipeline
from random import randrange
# load model and tokenizer from huggingface hub with pipeline
summarizer = pipeline("summarization", model="philschmid/flan-t5-base-samsum", device=0)
# select a random test sample
sample = dataset['test'][randrange(len(dataset["test"]))]
print(f"dialogue: \n{sample['dialogue']}\n---------------")
# summarize dialogue
res = summarizer(sample["dialogue"])
print(f"flan-t5-base summary:\n{res[0]['summary_text']}")
```
output
```bash
dialogue:
Abby: Have you talked to Miro?
Dylan: No, not really, I've never had an opportunity
Brandon: me neither, but he seems a nice guy
Brenda: you met him yesterday at the party?
Abby: yes, he's so interesting
Abby: told me the story of his father coming from Albania to the US in the early 1990s
Dylan: really, I had no idea he is Albanian
Abby: he is, he speaks only Albanian with his parents
Dylan: fascinating, where does he come from in Albania?
Abby: from the seacoast
Abby: Duress I believe, he told me they are not from Tirana
Dylan: what else did he tell you?
Abby: That they left kind of illegally
Abby: it was a big mess and extreme poverty everywhere
Abby: then suddenly the border was open and they just left
Abby: people were boarding available ships, whatever, just to get out of there
Abby: he showed me some pictures, like <file_photo>
Dylan: insane
Abby: yes, and his father was among the people
Dylan: scary but interesting
Abby: very!
---------------
flan-t5-base summary:
Abby met Miro yesterday at the party. Miro's father came from Albania to the US in the early 1990s. He speaks Albanian with his parents. The border was open and people were boarding ships to get out of there.
```
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Getting started with Pytorch 2.0 and Hugging Face Transformers | https://www.philschmid.de/getting-started-pytorch-2-0-transformers | 2023-03-16 | [
"Pytorch",
"BERT",
"HuggingFace",
"Optimization"
] | Learn how to get started with Pytorch 2.0 and Hugging Face Transformers and reduce your training time up to 2x. | On December 2, 2022, the PyTorch Team announced [PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/) at the PyTorch Conference, focused on better performance, being faster, more pythonic, and staying as dynamic as before.
This blog post explains how to get started with PyTorch 2.0 and Hugging Face Transformers today. It will cover how to fine-tune a BERT model for Text Classification using the newest PyTorch 2.0 features.
You will learn how to:
1. [Setup environment & install Pytorch 2.0](#1-setup-environment--install-pytorch-20)
2. [Load and prepare the dataset](#2-load-and-prepare-the-dataset)
3. [Fine-tune & evaluate BERT model with the Hugging Face `Trainer`](#3-fine-tune--evaluate-bert-model-with-the-hugging-face-trainer)
4. [Run Inference & test model](#4-run-inference--test-model)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: Pytorch 2.0
PyTorch 2.0 or, better, 1.14 is entirely backward compatible. Pytorch 2.0 will not require any modification to existing PyTorch code but can optimize your code by adding a single line of code with `model = torch.compile(model)`.
If you ask yourself, why is there a new major version and no breaking changes? The PyTorch team answered this question in their [FAQ](https://pytorch.org/get-started/pytorch-2.0/#faqs): _“We were releasing substantial new features that we believe change how you meaningfully use PyTorch, so we are calling it 2.0 instead.”_
Those new features include top-level support for TorchDynamo, AOTAutograd, PrimTorch, and TorchInductor.
This allows PyTorch 2.0 to achieve 1.3x-2x training speedups across [today's 46 supported model architectures](https://github.com/pytorch/torchdynamo/issues/681) from [Hugging Face Transformers](https://github.com/huggingface/transformers).
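To make the one-liner concrete, here is a minimal sketch (not from the original example; the toy model and shapes are illustrative) of compiling a plain PyTorch module directly:

```python
# Minimal sketch with an illustrative toy model: torch.compile wraps any nn.Module.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
compiled_model = torch.compile(model)  # the single line that enables the PyTorch 2.0 stack

x = torch.randn(8, 128)
out = compiled_model(x)  # the first call triggers compilation; later calls reuse the optimized graph
print(out.shape)  # torch.Size([8, 10])
```

In the Hugging Face `Trainer`, the same effect is enabled through the `torch_compile` training argument, which we use later in this post.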
If you want to learn more about PyTorch 2.0, check out the official [“GET STARTED”](https://pytorch.org/get-started/pytorch-2.0/).
---
Now we know how PyTorch 2.0 works, let's get started. 🚀
_Note: This tutorial was created and run on a g5.xlarge AWS EC2 Instance, including an NVIDIA A10G GPU._
## 1. Setup environment & install Pytorch 2.0
Our first step is to install PyTorch 2.0 and the Hugging Face Libraries, including `transformers` and `datasets`.
```python
# Install PyTorch 2.0 with cuda 11.7
!pip install "torch>=2.0" --extra-index-url https://download.pytorch.org/whl/cu117 --upgrade --quiet
```
Additionally, we are installing the latest release of `transformers` (4.27.1), which includes the native integration of PyTorch 2.0 into the `Trainer`.
```python
# Install transformers and dataset
!pip install "transformers==4.27.1" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" tensorboard scikit-learn
# Install git-lfs for pushing model and logs to the Hugging Face Hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To push our model to the Hub, you must register on [Hugging Face](https://huggingface.co/join). If you already have an account, you can skip this step. After you have an account, we will use the `login` util from the `huggingface_hub` package to log into our account and store our token (access key) on disk.
```python
from huggingface_hub import login
login(
token="", # ADD YOUR TOKEN HERE
add_to_git_credential=True
)
```
## 2. Load and prepare the dataset
To keep the example straightforward, we are training a Text Classification model on the [BANKING77](https://huggingface.co/datasets/banking77) dataset. The BANKING77 dataset provides a fine-grained set of intents (classes) in a banking/finance domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection.
We will use the `load_dataset()` method from the [🤗 Datasets](https://huggingface.co/docs/datasets/index) library to load the `banking77` dataset.
```python
from datasets import load_dataset
# Dataset id from huggingface.co/dataset
dataset_id = "banking77"
# Load raw dataset
raw_dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(raw_dataset['train'])}")
print(f"Test dataset size: {len(raw_dataset['test'])}")
```
Let’s check out an example of the dataset.
```python
from random import randrange
random_id = randrange(len(raw_dataset['train']))
raw_dataset['train'][random_id]
# {'text': "I can't get google pay to work right.", 'label': 2}
```
To train our model, we need to convert our "Natural Language" to token IDs. This is done by a Tokenizer, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary). If you want to learn more about this, check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=pt) of the [Hugging Face Course](https://huggingface.co/course/chapter1/1).
```python
from transformers import AutoTokenizer
# Model id to load the tokenizer
model_id = "bert-base-uncased"
# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Tokenize helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True, return_tensors="pt")
# Tokenize dataset
raw_dataset = raw_dataset.rename_column("label", "labels") # to match Trainer
tokenized_dataset = raw_dataset.map(tokenize, batched=True,remove_columns=["text"])
print(tokenized_dataset["train"].features.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'labels'])
```
## 3. Fine-tune & evaluate BERT model with the Hugging Face `Trainer`
After we have processed our dataset, we can start training our model. We will use the bert-base-uncased model. The first step is to load our model with `AutoModelForSequenceClassification` class from the [Hugging Face Hub](https://huggingface.co/bert-base-uncased). This will initialize the pre-trained BERT weights with a classification head on top. Here we pass the number of classes (77) from our dataset and the label names to have readable outputs for inference.
```python
from transformers import AutoModelForSequenceClassification
# Model id to load the model
model_id = "bert-base-uncased"
# Prepare model labels - useful for inference
labels = tokenized_dataset["train"].features["labels"].names
num_labels = len(labels)
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
# Download the model from huggingface.co/models
model = AutoModelForSequenceClassification.from_pretrained(
model_id, num_labels=num_labels, label2id=label2id, id2label=id2label
)
```
We evaluate our model during training. The `Trainer` supports evaluation during training by providing a `compute_metrics` method. We use the `evaluate` library to calculate the [f1 metric](https://huggingface.co/spaces/evaluate-metric/f1) during training on our test split.
```python
import evaluate
import numpy as np
# Metric Id
metric = evaluate.load("f1")
# Metric helper method
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return metric.compute(predictions=predictions, references=labels, average="weighted")
```
The last step is to define the hyperparameters (`TrainingArguments`) we use for our training. Here we are adding the PyTorch 2.0 introduced features for fast training times. To use the latest improvements of PyTorch 2.0, we only need to pass the `torch_compile` option in the `TrainingArguments`.
We also leverage the [Hugging Face Hub](https://huggingface.co/models) integration of the `Trainer` to push our checkpoints, logs, and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
from transformers import Trainer, TrainingArguments
# Id for remote repository
repository_id = "bert-base-banking77-pt2"
# Define training args
training_args = TrainingArguments(
output_dir=repository_id,
per_device_train_batch_size=16,
per_device_eval_batch_size=8,
learning_rate=5e-5,
num_train_epochs=3,
# PyTorch 2.0 specifics
bf16=True, # bfloat16 training
torch_compile=True, # optimizations
optim="adamw_torch_fused", # improved optimizer
# logging & evaluation strategies
logging_dir=f"{repository_id}/logs",
logging_strategy="steps",
logging_steps=200,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="f1",
# push to hub parameters
report_to="tensorboard",
push_to_hub=True,
hub_strategy="every_save",
hub_model_id=repository_id,
hub_token=HfFolder.get_token(),
)
# Create a Trainer instance
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["test"],
compute_metrics=compute_metrics,
)
```
We can start our training by using the `train` method of the `Trainer`.
```python
# Start training
trainer.train()
```
![tensorboard](/static/blog/getting-started-pytorch-2-0-transformers/tensorboard.png)
Using PyTorch 2.0 and the supported features in `transformers` allows us to train our BERT model on `10_000` samples within `457.7964` seconds.
We also ran the training without the `torch_compile` option to compare the training times. The training without `torch_compile` took 696 seconds, had a `train_samples_per_second` value of 43.1 and an `f1` score of `0.929`.
```bash
{'train_runtime': 696.2701, 'train_samples_per_second': 43.1, 'eval_f1': 0.928788}
```
By using the `torch_compile` option and the `adamw_torch_fused` optimizer, training throughput increases by roughly 52%, which corresponds to the training time dropping from 696 to 457 seconds (about 34% less) compared to the training without PyTorch 2.0.
```bash
{'train_runtime': 457.7964, 'train_samples_per_second': 65.55, 'eval_f1': 0.931773}
```
Our absolute training time went down from 696s to 457s. The `train_samples_per_second` value increased from 43 to 65.5. The `f1` score is the same or slightly better than in the training without `torch_compile`.
PyTorch 2.0 is incredibly powerful! 🚀
Let's save our results and tokenizer to the Hugging Face Hub and create a model card.
```python
# Save processor and create model card
tokenizer.save_pretrained(repository_id)
trainer.create_model_card()
trainer.push_to_hub()
```
## 4. Run Inference & test model
To wrap up this tutorial, we will run inference on a few examples and test our model. We will use the `pipeline` method from the `transformers` library to run inference on our model.
```python
from transformers import pipeline
# load model from huggingface.co/models using our repository id
classifier = pipeline("sentiment-analysis", model=repository_id, tokenizer=repository_id, device=0)
sample = "I have been waiting longer than expected for my bank card, could you provide information on when it will arrive?"
pred = classifier(sample)
print(pred)
# [{'label': 'card_arrival', 'score': 0.9903606176376343}]
```
## Conclusion
In this tutorial, we learned how to use PyTorch 2.0 to train a text classification model on the BANKING77 dataset. We saw that PyTorch 2.0 is a powerful tool to speed up your training times. In our example, running on an NVIDIA A10G, we achieved roughly 52% higher training throughput. The Hugging Face Trainer allows you to easily integrate PyTorch 2.0 into your training pipeline by simply adding the `torch_compile` option to the `TrainingArguments`. We can further benefit from PyTorch 2.0 by using the new fused AdamW optimizer when bf16 is available.
Additionally, I want to mention that the roughly 34% shorter training time translates into a corresponding cost saving for training, as well as faster iteration cycles and time to production. You should be able to see even better improvements by using A100 GPUs or by reducing the `Trainer` overhead, e.g. removing evaluation and logging.
PyTorch 2.0 is now officially launched and we are excited to see what the future brings. 🚀
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy T5 11B for inference for less than $500 | https://www.philschmid.de/deploy-t5-11b | 2022-10-25 | [
"HuggingFace",
"Transformers",
"Endpoints",
"bnb"
] | Learn how to deploy T5 11B on a single GPU using Hugging Face Inference Endpoints. | This blog will teach you how to deploy [T5 11B](https://huggingface.co/t5-11b) for inference using [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints). The T5 model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) paper and is one of the most used and known Transformer models today.
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on various tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: _`translate English to German: …`_, for summarization: _`summarize: ...`_
![t5.png](/static/blog/deploy-t5-11b/t5.png)
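As a minimal illustration of this text-to-text format (a sketch using the small `t5-small` checkpoint as a lightweight stand-in, not the 11B model we deploy below):

```python
# Illustrative sketch: task prefixes with a small T5 checkpoint (t5-small stands in for t5-11b).
from transformers import pipeline

t5 = pipeline("text2text-generation", model="t5-small")
# translation via the "translate English to German:" prefix
print(t5("translate English to German: The house is wonderful.")[0]["generated_text"])
# summarization via the "summarize:" prefix
print(t5("summarize: Peter and Elizabeth took a taxi to attend the night party in the city.")[0]["generated_text"])
```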
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active plan and _WRITE_ access to the model repository.
2. You can access the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
The Tutorial will cover how to:
1. [Prepare model repository, custom handler, and additional dependencies](#1-prepare-model-repository-custom-handler-and-additional-dependencies)
2. [Deploy the custom handler as an Inference Endpoint](#2-deploy-the-custom-handler-as-an-inference-endpoint)
3. [Send HTTP request using Python](#3-send-http-request-using-python)
## What is Hugging Face Inference Endpoints?
[🤗 Inference Endpoints](https://huggingface.co/inference-endpoints) offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
A Hugging Face Inference Endpoint is built from a [Hugging Face Model Repository](https://huggingface.co/models). It supports all the [Transformers and Sentence-Transformers tasks](https://huggingface.co/docs/inference-endpoints/supported_tasks) and any arbitrary ML Framework through easy customization by adding a [custom inference handler.](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) This [custom inference handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) can be used to implement simple inference pipelines for ML Frameworks like Keras, Tensorflow, and scikit-learn or can be used to add custom business logic to your existing transformers pipeline.
## Tutorial: Deploy T5-11B on a single NVIDIA T4
In this tutorial, you will learn how to deploy [T5 11B](https://huggingface.co/t5-11b) for inference using [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how you can integrate it via an API into your products.
## 1. Prepare model repository, custom handler, and additional dependencies
[T5 11B](https://huggingface.co/t5-11b) is, with 11 billion parameters, one of the largest openly available Transformer models. The weights in float32 are 45.2GB and are normally too big to deploy on an NVIDIA T4 with 16GB of GPU memory.
To be able to fit T5-11b into a single GPU, we are going to use two techniques:
- **mixed precision and sharding:** Converting the weights to fp16 will reduce the memory footprint by 2x, and sharding will allow us to easily place each “shard” on a GPU without the need to load the model into CPU memory first.
- **LLM.int8():** introduces a new quantization technique for Int8 matrix multiplication, which cuts the memory needed for inference roughly in half while preserving prediction quality. To learn more, check out this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration) or the [paper](https://arxiv.org/abs/2208.07339).
We already prepared a repository with sharded fp16 weights of `T5-11B` on the Hugging Face Hub at: [philschmid/t5-11b-sharded](https://huggingface.co/philschmid/t5-11b-sharded). Those weights were created using the following snippet.
_Note: If you want to convert the weights yourself, e.g. to deploy [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl) you need at least 80GB of memory._
```python
import torch
from transformers import AutoModelWithLMHead
from huggingface_hub import HfApi
# load model as float16
model = AutoModelWithLMHead.from_pretrained("t5-11b", torch_dtype=torch.float16, low_cpu_mem_usage=True)
# shard model an push to hub
model.save_pretrained("sharded", max_shard_size="2000MB")
# push to hub
api = HfApi()
api.upload_folder(
folder_path="sharded",
repo_id="philschmid/t5-11b-sharded-fp16",
)
```
After we have our sharded fp16 model weights, we can prepare the additional dependencies we will need to use **LLM.int8()**. LLM.int8() has been natively integrated into `transformers` through [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
To [add custom dependencies](https://huggingface.co/docs/inference-endpoints/guides/custom_dependencies), we need to add a **`requirements.txt`** file to your model repository on the Hugging Face Hub with the Python dependencies you want to install.
```python
accelerate==0.13.2
bitsandbytes
```
The last step before creating our Inference Endpoint is to [create a custom Inference Handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler). If you want to learn how to create a custom Handler for Inference Endpoints, you can either checkout the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler).
```python
from typing import Dict, List, Any
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
class EndpointHandler:
def __init__(self, path=""):
# load model and processor from path
self.model = AutoModelForSeq2SeqLM.from_pretrained(path, device_map="auto", load_in_8bit=True)
self.tokenizer = AutoTokenizer.from_pretrained(path)
def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
"""
Args:
data (:obj:):
includes the input text under "inputs" and optional generation parameters under "parameters"
"""
# process input
inputs = data.pop("inputs", data)
parameters = data.pop("parameters", None)
# preprocess
input_ids = self.tokenizer(inputs, return_tensors="pt").input_ids
# pass inputs with all kwargs in data
if parameters is not None:
outputs = self.model.generate(input_ids, **parameters)
else:
outputs = self.model.generate(input_ids)
# postprocess the prediction
prediction = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
return [{"generated_text": prediction}]
```
## 2. Deploy the custom handler as an Inference Endpoint
UI: [https://ui.endpoints.huggingface.co/](https://ui.endpoints.huggingface.co/)
Since we have prepared our model weights, dependencies, and custom handler, we can now deploy our model as an Inference Endpoint. We can deploy our Custom Handler the same way as a regular Inference Endpoint.
![model id](/static/blog/deploy-t5-11b/model.png)
Select the repository, the cloud, and the region. After that, we need to open the “Advanced Settings” to select `GPU • small • 1x NVIDIA Tesla T4`.
_Note: If you are trying to deploy the model on CPU the creation will fail_
![model id](/static/blog/deploy-t5-11b/instance.png)
The Inference Endpoint Service will check during the creation of your Endpoint if there is a `handler.py` available and will use it for serving requests no matter which “Task” you select.
The deployment can take 20-40 minutes because the large model (~30GB) has to be included in the image artifact build. After deploying our endpoint, we can test it using the inference widget.
![model id](/static/blog/deploy-t5-11b/inference.png)
## 3. Send HTTP request using Python
Hugging Face Inference Endpoints can be used with an HTTP client in any language. We will use Python and the `requests` library to send our requests (make sure you have it installed: `pip install requests`).
```python
import json
import requests as r
ENDPOINT_URL=""# url of your endpoint
HF_TOKEN=""
# payload samples
regular_payload = { "inputs": "translate English to German: The weather is nice today." }
parameter_payload = {
"inputs": "translate English to German: Hello my name is Philipp and I am a Technical Leader at Hugging Face",
"parameters" : {
"max_length": 40,
}
}
# HTTP headers for authorization
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
# send request
response = r.post(ENDPOINT_URL, headers=headers, json=parameter_payload)
generated_text = response.json()
print(generated_text)
```
## Conclusion
That's it! We successfully deployed our `T5-11b` to Hugging Face Inference Endpoints for less than $500.
To underline this again, we deployed one of the biggest available transformers in a managed, secure, scalable inference endpoint. This will allow Data scientists and Machine Learning Engineers to focus on R&D, improving the model rather than fiddling with MLOps topics.
Now, it's your turn! [Sign up](https://ui.endpoints.huggingface.co/new) and create your custom handler within a few minutes!
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Scalable, Secure Hugging Face Transformer Endpoints with Amazon SageMaker, AWS Lambda, and CDK | https://www.philschmid.de/huggingface-transformers-cdk-sagemaker-lambda | 2021-10-06 | [
"AWS",
"BERT",
"HuggingFace",
"Sagemaker"
] | Deploy Hugging Face Transformers to Amazon SageMaker and create an API for the Endpoint using AWS Lambda, API Gateway and AWS CDK. | Researchers, Data Scientists, Machine Learning Engineers are excellent at creating models to achieve new state-of-the-art performance on different tasks, but deploying those models in an accessible, scalable, and secure way is more of an art than science. Commonly, those skills are found in software engineering and DevOps. [Venturebeat](https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production/) reports that 87% of data science projects never make it to production, while [redapt](https://www.redapt.com/blog/why-90-of-machine-learning-models-never-make-it-to-production#:~:text=During%20a%20panel%20at%20last,actually%20make%20it%20into%20production.) claims it to be 90%.
We partnered up with AWS and the Amazon SageMaker team to reduce those numbers. Together we built 🤗 Transformers optimized Deep Learning Containers to accelerate and secure training and deployment of Transformers-based models. If you want to know more about the collaboration, take a look [here](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face).
In this blog, we are going to use the [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/?nc1=h_ls) to create our infrastructure and automatically deploy our model from the [Hugging Face Hub](https://huggingface.co/models) to the AWS Cloud. The AWS CDK uses the expressiveness of modern programming languages, like `Python`, to model and deploy your applications as code. In our example, we are going to build an application using the Hugging Face Inference DLC for model serving and Amazon [API Gateway](https://aws.amazon.com/de/api-gateway/) with [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) for building a secure accessible API. The AWS Lambda will be used as a client proxy to interact with our SageMaker Endpoint.
![architecture](/static/blog/huggingface-transformers-cdk-sagemaker-lambda/architecture.png)
If you’re not familiar with Amazon SageMaker: _“Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.”_ [[REF]](https://aws.amazon.com/sagemaker/faqs/)
You find the complete code for it in this [Github repository](https://github.com/philschmid/cdk-samples/tree/master/aws-lambda-sagemaker-endpoint-huggingface).
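If you have never seen a CDK app, the skeleton below is a minimal sketch (CDK v2 style, with the actual resource definitions omitted) of what “infrastructure as Python code” looks like; the linked repository contains the full stack with the SageMaker endpoint, Lambda function, and API Gateway:

```python
# Minimal CDK app skeleton (illustrative only; see the linked repository for the real stack).
import aws_cdk as cdk
from constructs import Construct

class HuggingfaceSagemakerEndpointStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The SageMaker model/endpoint, the Lambda proxy and the API Gateway would be
        # declared here as Python objects instead of hand-written CloudFormation templates.

app = cdk.App()
HuggingfaceSagemakerEndpointStack(app, "HuggingfaceSagemakerEndpoint")
app.synth()
```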
---
## Tutorial
Before we get started, make sure you have the [AWS CDK installed](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_install) and [configured your AWS credentials](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_prerequisites).
**What are we going to do:**
- selecting a model from the [Hugging Face Hub](https://huggingface.co/models)
- bootstrap our CDK project
- Deploy the model using CDK
- Run inference and test the API
### 1. Selecting a model from the [Hugging Face Hub](https://huggingface.co/models)
For those of you who don't know what the Hugging Face Hub is, you should definitely take a look [here](https://huggingface.co/docs/hub/main). But the TL;DR is that the Hugging Face Hub is an open, community-driven collection of state-of-the-art models. At the time of writing the blog post, we have 17,501 free models available to use.
To select the model we want to use we navigate to [hf.co/models](http://hf.co/models) then pre-filter using the task on the left, e.g. `summarization`. For this blog post, I went with the [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), which was fine-tuned on CNN articles for summarization.
![Hugging Face Hub](/static/blog/huggingface-transformers-cdk-sagemaker-lambda/hub.png)
### 2. Bootstrap our CDK project
Deploying applications using the CDK may require additional resources for CDK to store for example assets. The process of provisioning these initial resources is called [bootstrapping](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html). So before being able to deploy our application, we need to make sure that we bootstrapped our project.
```bash
cdk bootstrap
```
### 3. Deploy the model using CDK
Now we are able to deploy our application with the whole infrastructure and deploy our previously selected Transformer `sshleifer/distilbart-cnn-12-6` to Amazon SageMaker. Our application uses the [CDK context](https://docs.aws.amazon.com/cdk/latest/guide/context.html) to accept dynamic parameters for the deployment. We can provide our model id with the key `model` and our task with the key `task`. The application allows further configuration, like passing a different `instance_type` when deploying. You can find the whole list of arguments in the [repository](https://github.com/philschmid/cdk-samples/tree/master/aws-lambda-sagemaker-endpoint-huggingface#context).
In our case we will provide `model=sshleifer/distilbart-cnn-12-6` and `task=summarization` with the a GPU instance `instance_type=ml.g4dn.xlarge`.
```bash
cdk deploy \
-c model="sshleifer/distilbart-cnn-12-6" \
-c task="summarization" \
-c instance_type="ml.g4dn.xlarge"
```
After running the `cdk deploy` command we will get an output of all resources, which are going to be created. We then confirm our deployment and the CDK will create all required resources, deploy our AWS Lambda function and our Model to Amazon SageMaker. This takes around 3-5 minutes.
After the deployment the console output should look similar to this.
```bash
✅ HuggingfaceSagemakerEndpoint
Outputs:
HuggingfaceSagemakerEndpoint.hfapigwEndpointE75D67B4 = https://r7rch77fhj.execute-api.us-east-1.amazonaws.com/prod/
Stack ARN:
arn:aws:cloudformation:us-east-1:558105141721:stack/HuggingfaceSagemakerEndpoint/6eab9e10-269b-11ec-86cc-0af6d09e2aab
```
### 4. Run inference and test the API
After the deployment has successfully completed, we can grab our Endpoint URL `HuggingfaceSagemakerEndpoint.hfapigwEndpointE75D67B4` from the CLI output and use any REST client to test it.
![insomnia request](/static/blog/huggingface-transformers-cdk-sagemaker-lambda/request.png)
the request as curl to copy
```bash
curl --request POST \
--url https://r7rch77fhj.execute-api.us-east-1.amazonaws.com/prod/ \
--header 'Content-Type: application/json' \
--data '{
"inputs": "Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021, is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016, was recognized for its work in democratizing NLP, the global market value for which is expected to hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team. Hugging Face is also knee-deep in a project called BigScience, an international, multi-company, multi-university research project with over 500 researchers, designed to better understand and improve results on large language models."
}
'
```
## Conclusion
With the help of the AWS CDK we were able to deploy all the required infrastructure for our API by defining it in a programming language we know and use. The Hugging Face Inference DLC allowed us to deploy a model from the [Hugging Face Hub](https://huggingface.co/) without writing a single line of inference code, and we can now use our publicly exposed API securely in any application, service, or frontend we want.
To optimize the solution you can tweak the CDK template to your needs, e.g. add a VPC for the AWS Lambda function and the SageMaker Endpoint to accelerate communication between the two.
---
You can find the code [here](https://github.com/philschmid/cdk-samples/tree/master/aws-lambda-sagemaker-endpoint-huggingface) and feel free to open a thread in the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy FLAN-UL2 20B on Amazon SageMaker | https://www.philschmid.de/deploy-flan-ul2-sagemaker | 2023-03-20 | [
"GenerativeAI",
"SageMaker",
"HuggingFace",
"Inference"
] | Learn how to deploy Google's FLAN-UL 20B on Amazon SageMaker for inference. | Welcome to this Amazon SageMaker guide on how to deploy the [FLAN-UL2 20B](https://huggingface.co/google/flan-ul2) on Amazon SageMaker for inference. We will deploy [google/flan-ul2](https://huggingface.co/google/flan-ul2) to Amazon SageMaker for real-time inference using Hugging Face Inference Deep Learning Container.
![flan-ul2-on-amazon-sagemaker](/static/blog/deploy-flan-ul2-sagemaker/sagemaker-endpoint.png)
What we are going to do
1. Create FLAN-UL2 20B inference script
2. Create SageMaker `model.tar.gz` artifact
3. Deploy the model to Amazon SageMaker
4. Run inference using the deployed model
## Quick intro: FLAN-UL2, a bigger FLAN-T5
Flan-UL2 is an encoder-decoder (seq2seq) model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year and was fine-tuned using the "Flan" prompt tuning and dataset collection. FLAN-UL2 was trained as part of the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper. Noticeable differences to FLAN-T5 XXL are:
- FLAN-UL2 has a context window of 2048 tokens compared to 512 for FLAN-T5 XXL
- +~3% better performance than FLAN-T5 XXL on [benchmarks](https://huggingface.co/google/flan-ul2#performance-improvment)
![flan-ul2](/static/blog/deploy-flan-ul2-sagemaker/flan.webp)
- Paper: https://arxiv.org/abs/2210.11416
- Official repo: https://github.com/google-research/t5x
---
Before we can get started we have to install the missing dependencies to be able to create our `model.tar.gz` artifact and create our Amazon SageMaker endpoint.
We also have to make sure we have the permission to create our SageMaker Endpoint.
```python
!pip install "sagemaker>=2.140.0" boto3 "huggingface_hub==0.13.0" "hf-transfer" --upgrade
```
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## Create FLAN-UL2 20B inference script
Amazon SageMaker allows us to customize the inference script by providing an `inference.py` file. The `inference.py` file is the entry point to our model. It is responsible for loading the model and handling the inference request. If you are used to deploying Hugging Face Transformers, that might be new to you. Usually, we just provide the `HF_MODEL_ID` and `HF_TASK` and the Hugging Face DLC takes care of the rest. For `FLAN-UL2` that's not yet possible. We have to provide the `inference.py` file and implement the `model_fn` and `predict_fn` functions to efficiently load the 20B parameter model.
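For comparison, the “usual” zero-code path mentioned above looks roughly like the sketch below (model id and task are illustrative; this is not what we deploy in this tutorial):

```python
# Sketch of the standard env-variable based deployment (not used for FLAN-UL2 in this tutorial).
from sagemaker.huggingface.model import HuggingFaceModel

hub_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # illustrative model id
        "HF_TASK": "text-classification",                                  # illustrative task
    },
    role=role,                     # the SageMaker execution role defined above
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)
```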
If you want to learn more about creating a custom inference script you can check out [Creating document embeddings with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/custom-inference-huggingface-sagemaker)
In addition to the `inference.py` file we also have to provide a `requirements.txt` file. The `requirements.txt` file is used to install the dependencies for our `inference.py` file.
The first step is to create a `code/` directory.
```python
!mkdir code
```
Next, we create a `requirements.txt` file and add `accelerate` to it. The `accelerate` library is used to efficiently load the model across multiple GPUs.
```python
%%writefile code/requirements.txt
accelerate==0.18.0
transformers==4.27.2
```
The last step for our inference handler is to create the `inference.py` file. The `inference.py` file is responsible for loading the model and handling the inference request. The `model_fn` function is called when the model is loaded. The `predict_fn` function is called when we want to do inference.
We are using the `AutoModelForSeq2SeqLM` class from transformers to load the model from the local directory (`model_dir`) in the `model_fn`. In the `predict_fn` function we are using the `generate` function from transformers to generate the text for a given input prompt.
```python
%%writefile code/inference.py
from typing import Dict, List, Any
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
def model_fn(model_dir):
# load model and processor from model_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir,
device_map="auto",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
return model, tokenizer
def predict_fn(data, model_and_tokenizer):
# unpack model and tokenizer
model, tokenizer = model_and_tokenizer
# process input
inputs = data.pop("inputs", data)
parameters = data.pop("parameters", None)
# preprocess
input_ids = tokenizer(inputs, return_tensors="pt").input_ids
# pass inputs with all kwargs in data
if parameters is not None:
outputs = model.generate(input_ids, **parameters)
else:
outputs = model.generate(input_ids)
# postprocess the prediction
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True)
return [{"generated_text": prediction}]
```
## Create SageMaker `model.tar.gz` artifact
To use our `inference.py` we need to bundle it together with our model weights into a `model.tar.gz`. The archive includes all our model artifacts to run inference. The `inference.py` script will be placed into a `code/` folder. We will use the `huggingface_hub` SDK to easily download [google/flan-ul2](https://huggingface.co/google/flan-ul2) from [Hugging Face](https://hf.co/models) and then upload it to Amazon S3 with the `sagemaker` SDK.
Make sure the environment has enough disk space to store the model; ~35GB should be enough.
```python
from distutils.dir_util import copy_tree
from pathlib import Path
import os
# set HF_HUB_ENABLE_HF_TRANSFER env var to enable hf-transfer for faster downloads
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
HF_MODEL_ID="google/flan-ul2"
# create model dir
model_tar_dir = Path(HF_MODEL_ID.split("/")[-1])
model_tar_dir.mkdir(exist_ok=True)
# Download model from Hugging Face into model_dir
snapshot_download(HF_MODEL_ID, local_dir=str(model_tar_dir), local_dir_use_symlinks=False)
# copy code/ to model dir
copy_tree("code/", str(model_tar_dir.joinpath("code")))
```
Before we can upload the model to Amazon S3 we have to create a `model.tar.gz` archive. Important is that the archive should directly contain all files and not a folder with the files. For example, your file should look like this:
```
model.tar.gz/
|- config.json
|- pytorch_model-00001-of-00012.bin
|- tokenizer.json
|- ...
```
```python
parent_dir=os.getcwd()
# change to model dir
os.chdir(str(model_tar_dir))
# use pigz for faster and parallel compression
!tar -cf model.tar.gz --use-compress-program=pigz *
# change back to parent dir
os.chdir(parent_dir)
```
After we created the `model.tar.gz` archive we can upload it to Amazon S3. We will use the `sagemaker` SDK to upload the model to our sagemaker session bucket.
```python
from sagemaker.s3 import S3Uploader
# upload model.tar.gz to s3
s3_model_uri = S3Uploader.upload(local_path=str(model_tar_dir.joinpath("model.tar.gz")), desired_s3_uri=f"s3://{sess.default_bucket()}/flan-ul2")
print(f"model uploaded to: {s3_model_uri}")
```
## Deploy the model to Amazon SageMaker
After we have uploaded our model archive, we can deploy our model to Amazon SageMaker. We will use the `HuggingFaceModel` class to create our real-time inference endpoint.
We are going to deploy the model to an `g5.12xlarge` instance. The `g5.12xlarge` instance is a GPU instance with 4x NVIDIA A10G GPU. If you are interested in how you could add autoscaling to your endpoint you can check out [Going Production: Auto-scaling Hugging Face Transformers with Amazon SageMaker](https://www.philschmid.de/auto-scaling-sagemaker-huggingface).
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.26", # transformers version used
pytorch_version="1.13", # pytorch version used
py_version='py39', # python version used
model_server_workers=1
)
# deploy the endpoint endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g5.12xlarge",
# container_startup_health_check_timeout=600, # increase timeout for large models
# model_data_download_timeout=600, # increase timeout for large models
)
```
## Run inference using the deployed model
The `.deploy()` returns an `HuggingFacePredictor` object which can be used to request inference using the `.predict()` method. Our endpoint expects a `json` with at least `inputs` key.
When using generative models, most of the time you want to configure or customize your prediction to fit your needs, for example by using beam search, configuring the max or min length of the generated sequence, or adjusting the temperature to reduce repetition.
The Transformers library provides different strategies and kwargs to do this, the Hugging Face Inference toolkit offers the same functionality using the parameters attribute of your request payload. Below you can find examples on how to generate text without parameters, with beam search, and using custom configurations. If you want to learn about different decoding strategies check out this [blog post](https://huggingface.co/blog/how-to-generate).
```python
payload = """Summarize the following text:
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
parameters = {
"do_sample": True,
"max_new_tokens": 50,
"top_p": 0.95,
}
# Run prediction
predictor.predict({
"inputs": payload,
"parameters" :parameters
})
# [{'generated_text': 'Peter stayed with Elizabeth at the hospital for 3 days.'}]
```
Let's try another example! This time we focus on question answering with a step-by-step approach including some simple math.
```python
payload = """Answer the following question step by step:
Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
"""
parameters = {
"early_stopping": True,
"length_penalty": 2.0,
"max_new_tokens": 50,
"temperature": 0,
}
# Run prediction
predictor.predict({
"inputs": payload,
"parameters" :parameters
})
# [{'generated_text': 'He buys 2 cans of tennis balls, so he has 2 * 3 = 6 tennis balls. He has 5 + 6 = 11 tennis balls now.'}]
```
### Delete model and endpoint
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker | https://www.philschmid.de/sagemaker-serverless-huggingface-distilbert | 2022-04-21 | [
"HuggingFace",
"AWS",
"BERT",
"Serverless"
] | Learn how to deploy a Transformer model like BERT to Amazon SageMaker Serverless using the Python SageMaker SDK. | [Notebook: serverless_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/19_serverless_inference/sagemaker-notebook.ipynb)
Welcome to this getting started guide, you learn how to use the Hugging Face Inference DLCs and Amazon SageMaker Python SDK to create a [Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html) endpoint.
Amazon SageMaker Serverless Inference is a new capability in SageMaker that enables you to deploy and scale ML models in a Serverless fashion. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic similar to AWS Lambda.
Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. With a pay-per-use model, Serverless Inference is a cost-effective option if you have an infrequent or unpredictable traffic pattern.
You will learn how to:
- [1. Setup development environment and permissions](#1-setup-development-environment-and-permissions)
- [2. Create and Deploy a Serverless Hugging Face Transformers](#2-create-and-deploy-a-serverless-hugging-face-transformers)
- [3. Send requests to Serverless Inference Endpoint](#3-send-requests-to-serverless-inference-endpoint)
Let's get started! 🚀
### How it works
The following diagram shows the workflow of Serverless Inference.
![architecture](/static/blog/sagemaker-serverless-huggingface-distilbert/serverless.png)
When you create a serverless endpoint, SageMaker provisions and manages the compute resources for you. Then, you can make inference requests to the endpoint and receive model predictions in response. SageMaker scales the compute resources up and down as needed to handle your request traffic, and you only pay for what you use.
### Limitations
- Memory size: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB
- Concurrent invocations: 50 per region
- Cold starts: ms to seconds. Can be monitored with the `ModelSetupTime` CloudWatch metric (see the sketch below)
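A minimal sketch for monitoring that cold-start metric with boto3 (the `AWS/SageMaker` namespace, the `EndpointName`/`VariantName` dimensions, and the endpoint name are assumptions for illustration):

```python
# Sketch: read the ModelSetupTime metric for a serverless endpoint via CloudWatch.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SageMaker",                 # assumed namespace
    MetricName="ModelSetupTime",
    Dimensions=[
        {"Name": "EndpointName", "Value": "my-serverless-endpoint"},  # replace with your endpoint
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)
print(stats["Datapoints"])
```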
_NOTE: You can run this demo in Sagemaker Studio, your local machine, or Sagemaker Notebook Instances_
## 1. Setup development environment and permissions
```python
!pip install sagemaker --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.86.0"
```
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 2. Create and Deploy a Serverless Hugging Face Transformers
We use the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model running our serverless endpoint. This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2. This model reaches an accuracy of 91.3 on the dev set (for comparison, Bert bert-base-uncased version reaches an accuracy of 92.7).
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serverless import ServerlessInferenceConfig
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'distilbert-base-uncased-finetuned-sst-2-english',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py38', # python version used
)
# Specify MemorySizeInMB and MaxConcurrency in the serverless config object
serverless_config = ServerlessInferenceConfig(
memory_size_in_mb=4096, max_concurrency=10,
)
# deploy the endpoint endpoint
predictor = huggingface_model.deploy(
serverless_inference_config=serverless_config
)
```
## 3. Send requests to Serverless Inference Endpoint
The `.deploy()` returns an `HuggingFacePredictor` object which can be used to request inference. This `HuggingFacePredictor` makes it easy to send requests to your endpoint and get the results back.
_The first request might have some coldstart (2-5s)._
```python
data = {
"inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
res = predictor.predict(data=data)
print(res)
```
### Clean up
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## 4. Conclusion
With the help of the Python SageMaker SDK, we were able to deploy an Amazon SageMaker Serverless Inference Endpoint for Hugging Face Transformers with 1 command (`deploy`).
This will help any large or small company get started quickly and cost-effectively with Hugging Face Transformers on AWS. The beauty of Serverless computing is that your Data Science or Machine Learning team is not spending thousands of dollars while implementing a Proof of Concept or at the start of a new product.
Once the PoC is successful, or if Serverless Inference does not perform well or becomes too expensive, you can easily deploy your model to real-time endpoints with GPUs just by changing one line of code, as shown below.
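As a sketch of that change (reusing the `huggingface_model` object from section 2; the GPU instance type is illustrative):

```python
# Sketch: the same model object, deployed to a real-time GPU endpoint instead of serverless.
predictor = huggingface_model.deploy(
    initial_instance_count=1,        # number of instances
    instance_type="ml.g4dn.xlarge",  # GPU instance instead of a ServerlessInferenceConfig
)
```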
You should definitely give SageMaker Serverless Inference a try!
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Efficient Large Language Model training with LoRA and Hugging Face | https://www.philschmid.de/fine-tune-flan-t5-peft | 2023-03-23 | [
"GenerativeAI",
"LoRA",
"HuggingFace",
"Training"
] | Learn how to fine-tune Google's FLAN-T5 XXL on a Single GPU using LoRA And Hugging Face Transformers. | In this blog, we are going to show you how to apply [Low-Rank Adaptation of Large Language Models (LoRA)](https://arxiv.org/abs/2106.09685) to fine-tune FLAN-T5 XXL (11 billion parameters) on a single GPU. We are going to leverage Hugging Face [Transformers](https://huggingface.co/docs/transformers/index), [Accelerate](https://huggingface.co/docs/accelerate/index), and [PEFT](https://github.com/huggingface/peft).
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load and prepare the dataset](#2-load-and-prepare-the-dataset)
3. [Fine-Tune T5 with LoRA and bnb int-8](#3-fine-tune-t5-with-lora-and-bnb-int-8)
4. [Evaluate & run Inference with LoRA FLAN-T5](#4-evaluate--run-inference-with-lora-flan-t5)
### Quick intro: PEFT or Parameter Efficient Fine-tuning
[PEFT](https://github.com/huggingface/peft), or Parameter Efficient Fine-tuning, is a new open-source library from Hugging Face to enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. PEFT currently includes techniques for:
- LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/pdf/2106.09685.pdf)
- Prefix Tuning: [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)
- P-Tuning: [GPT Understands, Too](https://arxiv.org/pdf/2103.10385.pdf)
- Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/pdf/2104.08691.pdf)
_Note: This tutorial was created and run on a g5.2xlarge AWS EC2 Instance, including 1 NVIDIA A10G._
## 1. Setup Development Environment
In our example, we use the [PyTorch Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-pytorch.html) with already set up CUDA drivers and PyTorch installed. We still have to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
```python
# install Hugging Face Libraries
!pip install git+https://github.com/huggingface/peft.git
!pip install "transformers==4.27.2" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" "bitsandbytes==0.37.1" loralib --upgrade --quiet
# install additional dependencies needed for training
!pip install rouge-score tensorboard py7zr
```
## 2. Load and prepare the dataset
We will use the [samsum](https://huggingface.co/datasets/samsum) dataset, a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.
```python
{
"id": "13818513",
"summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
"dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"
}
```
To load the `samsum` dataset, we use the **`load_dataset()`** method from the 🤗 Datasets library.
```python
from datasets import load_dataset
# Load dataset from the hub
dataset = load_dataset("samsum")
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 14732
# Test dataset size: 819
```
To train our model, we need to convert our inputs (text) to token IDs. This is done by a 🤗 Transformers Tokenizer. If you are not sure what this means, check out **[chapter 6](https://huggingface.co/course/chapter6/1?fw=tf)** of the Hugging Face Course.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id="google/flan-t5-xxl"
# Load tokenizer of FLAN-t5-XL
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Before we can start training, we need to preprocess our data. Abstractive Summarization is a text-generation task. Our model will take a text as input and generate a summary as output. We want to understand how long our input and output will take to batch our data efficiently.
```python
from datasets import concatenate_datasets
import numpy as np
# The maximum total input sequence length after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded.
tokenized_inputs = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["dialogue"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
input_lenghts = [len(x) for x in tokenized_inputs["input_ids"]]
# take 85 percentile of max length for better utilization
max_source_length = int(np.percentile(input_lenghts, 85))
print(f"Max source length: {max_source_length}")
# The maximum total sequence length for target text after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded."
tokenized_targets = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["summary"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
target_lenghts = [len(x) for x in tokenized_targets["input_ids"]]
# take 90 percentile of max length for better utilization
max_target_length = int(np.percentile(target_lenghts, 90))
print(f"Max target length: {max_target_length}")
```
We preprocess our dataset before training and save it to disk. You could run this step on your local machine or a CPU and upload it to the [Hugging Face Hub](https://huggingface.co/docs/hub/datasets-overview).
```python
def preprocess_function(sample,padding="max_length"):
# add prefix to the input for t5
inputs = ["summarize: " + item for item in sample["dialogue"]]
# tokenize inputs
model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
# Tokenize targets with the `text_target` keyword argument
labels = tokenizer(text_target=sample["summary"], max_length=max_target_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length":
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=["dialogue", "summary", "id"])
print(f"Keys of tokenized dataset: {list(tokenized_dataset['train'].features)}")
# save datasets to disk for later easy loading
tokenized_dataset["train"].save_to_disk("data/train")
tokenized_dataset["test"].save_to_disk("data/eval")
```
## 3. Fine-Tune T5 with LoRA and bnb int-8
In addition to the LoRA technique, we will use [bitsandbytes LLM.int8()](https://huggingface.co/blog/hf-bitsandbytes-integration) to quantize our frozen LLM to int8. This allows us to reduce the memory needed for FLAN-T5 XXL by ~4x.
The first step of our training is to load the model. We are going to use [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16), which is a sharded version of [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl). The sharding will help us to not run off of memory when loading the model.
```python
from transformers import AutoModelForSeq2SeqLM
# huggingface hub model id
model_id = "philschmid/flan-t5-xxl-sharded-fp16"
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")
```
Now, we can prepare our model for the LoRA int-8 training using `peft`.
```python
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType
# Define LoRA Config
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["q", "v"],
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM
)
# prepare int-8 model for training
model = prepare_model_for_int8_training(model)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# trainable params: 18874368 || all params: 11154206720 || trainable%: 0.16921300163961817
```
As you can see, here we are only training 0.16% of the parameters of the model! This huge memory gain will enable us to fine-tune the model without memory issues.
Next is to create a `DataCollator` that will take care of padding our inputs and labels. We will use the `DataCollatorForSeq2Seq` from the 🤗 Transformers library.
```python
from transformers import DataCollatorForSeq2Seq
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = -100
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
```
The last step is to define the hyperparameters (`TrainingArguments`) we want to use for our training.
```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
output_dir="lora-flan-t5-xxl"
# Define training args
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
auto_find_batch_size=True,
learning_rate=1e-3, # higher learning rate
num_train_epochs=5,
logging_dir=f"{output_dir}/logs",
logging_strategy="steps",
logging_steps=500,
save_strategy="no",
report_to="tensorboard",
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
```
Let's now train our model and run the cells below. Note that for T5, some layers are kept in `float32` for stability purposes.
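If you are curious, you can verify this before starting the run. A minimal sketch (not part of the original notebook, purely a sanity check):

```python
# count parameters per dtype: you should see int8 weights next to the float32 layers kept for stability
from collections import Counter

dtype_counts = Counter(str(p.dtype) for p in model.parameters())
print(dtype_counts)
```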
```python
# train model
trainer.train()
```
The training took ~10 hours and 36 minutes and cost `~$13.22` for the 10h of training. For comparison, a [full fine-tuning on FLAN-T5-XXL](https://www.philschmid.de/fine-tune-flan-t5-deepspeed#3-results--experiments) with the same duration (10h) requires 8x A100 40GBs and costs ~$322.
We can save our model to use it for inference and evaluate it. We will save it to disk for now, but you could also upload it to the [Hugging Face Hub](https://huggingface.co/docs/hub/main) using the `model.push_to_hub` method.
```python
# Save our LoRA model & tokenizer results
peft_model_id="results"
trainer.model.save_pretrained(peft_model_id)
tokenizer.save_pretrained(peft_model_id)
# if you also want to save the full base model, call
# trainer.model.base_model.save_pretrained(peft_model_id)
```
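If you prefer the Hub over local disk, a minimal sketch could look like the following. The repository id is a hypothetical placeholder you would replace with your own, and it assumes you are already authenticated (e.g. via `huggingface-cli login`).

```python
# push the LoRA adapter weights & tokenizer to the Hugging Face Hub
# note: "your-username/flan-t5-xxl-samsum-lora" is a placeholder repository id
repo_id = "your-username/flan-t5-xxl-samsum-lora"
trainer.model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)
```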
Our LoRA checkpoint is only 84MB in size and includes all of the knowledge learned on samsum.
## 4. Evaluate & run Inference with LoRA FLAN-T5
After the training is done we want to evaluate and test it. The most commonly used metric to evaluate summarization tasks is [rouge_score](<https://en.wikipedia.org/wiki/ROUGE_(metric)>), short for Recall-Oriented Understudy for Gisting Evaluation. This metric does not behave like the standard accuracy: it will compare a generated summary against a set of reference summaries.
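To get a feeling for how ROUGE behaves, here is a minimal, illustrative sketch (the two sentences are made-up examples, not from the dataset):

```python
import evaluate

rouge = evaluate.load("rouge")
# compare one generated summary against one reference summary
scores = rouge.compute(
    predictions=["Paul books a table for dinner at 8pm."],
    references=["Paul reserves a table at the restaurant for 8pm."],
    use_stemmer=True,
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum scores between 0 and 1
```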
We are going to use the `evaluate` library to compute the `rouge` score. We can run inference using `PEFT` and `transformers`. For our FLAN-T5 XXL model, we need at least 18GB of GPU memory.
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "results"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
model.eval()
print("Peft model loaded")
```
Let’s load the dataset again with a random sample to try the summarization.
```python
from datasets import load_dataset
from random import randrange
# Load dataset from the hub and get a sample
dataset = load_dataset("samsum")
sample = dataset['test'][randrange(len(dataset["test"]))]
input_ids = tokenizer(sample["dialogue"], return_tensors="pt", truncation=True).input_ids.cuda()
# with torch.inference_mode():
outputs = model.generate(input_ids=input_ids, max_new_tokens=10, do_sample=True, top_p=0.9)
print(f"input sentence: {sample['dialogue']}\n{'---'* 20}")
print(f"summary:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]}")
```
Nice! Our model works! Now, let's take a closer look and evaluate it against the `test` set of the processed `samsum` dataset. For this we need to create some utilities that generate the summaries and group them together with their references before computing the ROUGE score.
```python
import evaluate
import numpy as np
from datasets import load_from_disk
from tqdm import tqdm
# Metric
metric = evaluate.load("rouge")
def evaluate_peft_model(sample,max_target_length=50):
# generate summary
outputs = model.generate(input_ids=sample["input_ids"].unsqueeze(0).cuda(), do_sample=True, top_p=0.9, max_new_tokens=max_target_length)
prediction = tokenizer.decode(outputs[0].detach().cpu().numpy(), skip_special_tokens=True)
# decode eval sample
# Replace -100 in the labels as we can't decode them.
labels = np.where(sample['labels'] != -100, sample['labels'], tokenizer.pad_token_id)
labels = tokenizer.decode(labels, skip_special_tokens=True)
# Some simple post-processing
return prediction, labels
# load test dataset from disk
test_dataset = load_from_disk("data/eval/").with_format("torch")
# run predictions
# this can take ~45 minutes
predictions, references = [] , []
for sample in tqdm(test_dataset):
p,l = evaluate_peft_model(sample)
predictions.append(p)
references.append(l)
# compute metric
rouge = metric.compute(predictions=predictions, references=references, use_stemmer=True)
# print results
print(f"rouge1: {rouge['rouge1']* 100:2f}%")
print(f"rouge2: {rouge['rouge2']* 100:2f}%")
print(f"rougeL: {rouge['rougeL']* 100:2f}%")
print(f"rougeLsum: {rouge['rougeLsum']* 100:2f}%")
# rouge1: 50.386161%
# rouge2: 24.842412%
# rougeL: 41.370130%
# rougeLsum: 41.394230%
```
Our PEFT fine-tuned FLAN-T5-XXL achieved a rouge1 score of `50.38%` on the test dataset. For comparison, a [full fine-tuning of flan-t5-base achieved a rouge1 score of 47.23](https://www.philschmid.de/fine-tune-flan-t5). That is a `3%` improvement.
It is incredible to see that our LoRA checkpoint is only 84MB in size and that the model still achieves better performance than a smaller, fully fine-tuned model.
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS | https://www.philschmid.de/getting-started-habana-gaudi | 2022-06-14 | [
"BERT",
"Habana",
"HuggingFace",
"Optimum"
] | Learn how to setup a Deep Learning Environment for Hugging Face Transformers with Habana Gaudi on AWS using the DL1 instance type. | This blog contains instructions for how to setup a Deep Learning Environment for Habana Gaudi on AWS using the DL1 instance type and Hugging Face libraries like [transformers](https://huggingface.co/docs/transformers/index), [optimum](https://huggingface.co/docs/optimum/index), [datasets](https://huggingface.co/docs/datasets/index). This guide will show you how to set up the development environment on the AWS cloud and get started with Hugging Face Libraries.
This guide covers:
1. [Requirements](#1-requirements)
2. [Create an AWS EC2 instance](#2-create-an-aws-ec2-instance)
3. [Connect to the instance via ssh](#3-connect-to-the-instance-via-ssh)
4. [Use Jupyter Notebook/Lab via ssh](#4-use-jupyter-notebook-lab-via-ssh)
5. [Fine-tune Hugging Face Transformers with Optimum](#5-fine-tune-hugging-face-transformers-with-optimum)
6. [Clean up](#6-clean-up)
Or you can jump to the [Conclusion](#conclusion).
Let's get started! 🚀
## 1. Requirements
Before we can start, make sure you have met the following requirements:
- AWS Account with quota for [DL1 instance type](https://aws.amazon.com/ec2/instance-types/dl1/)
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed
- AWS IAM user [configured in CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) with permission to create and manage ec2 instances
## 2. Create an AWS EC2 instance
To be able to launch an EC2 instance we need to create a `key-pair` and `security-group`, which will be used to access the instance via ssh.
Configure AWS PROFILE and AWS REGION which will be used for the instance
```bash
export AWS_PROFILE=<your-aws-profile>
export AWS_DEFAULT_REGION=<your-aws-region>
```
We create a key pair using the `aws` cli and save the key into a local `.pem` file.
```bash
KEY_NAME=habana
aws ec2 create-key-pair --key-name ${KEY_NAME} --query 'KeyMaterial' --output text > ${KEY_NAME}.pem
chmod 400 ${KEY_NAME}.pem
```
Next we create a security group, which allows ssh access to the instance. We are going to use the default VPC, but this could be adjusted by changing the `vpc-id` in the `create-security-group` command.
```bash
SG_NAME=habana
DEFAULT_VPC_ID=$(aws ec2 describe-vpcs --query 'Vpcs[?isDefault==true].VpcId' --output text)
echo "Default VPC ID: ${DEFAULT_VPC_ID}"
SECURITY_GROUP_ID=$(aws ec2 create-security-group --group-name ${SG_NAME}-sg --description "SG for Habana Deep Learning" --vpc-id ${DEFAULT_VPC_ID} --output text)
echo "Security Group ID: ${SECURITY_GROUP_ID}"
echo $(aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol tcp --port 22 --cidr 0.0.0.0/0 --output text)
```
We completed all necessary steps to start our DL1 Habana Gaudi instance in a secure environment. We are going to use the community AMI created and managed by Habana, which is identical to the marketplace image. The community AMI doesn't require an opt-in first. If you want to use the official marketplace image you have to subscribe on the UI first and then you can access it with the following command `AMI_ID=$(aws ec2 describe-images --filters "Name=name,Values=* Habana Deep Learning Base AMI (Ubuntu 20.*" --query 'Images[0].ImageId' --output text)`.
```bash
AMI_ID=$(aws ec2 describe-images --filters "Name=name,Values=*habanalabs-base-ubuntu20.04*" --query 'Images[0].ImageId' --output text)
echo "AMI ID: ${AMI_ID}"
INSTANCE_TYPE=dl1.24xlarge
INSTANCE_NAME=habana
aws ec2 run-instances \
--image-id ${AMI_ID} \
--key-name ${KEY_NAME} \
--count 1 \
--instance-type ${INSTANCE_TYPE} \
--security-group-ids ${SECURITY_GROUP_ID} \
--block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=150}' \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${INSTANCE_NAME}-demo}]"
```
_P.S. you can also use the `start_instance.sh` script from the [Github repository](https://github.com/philschmid/deep-learning-habana-huggingface), which does all of the steps above._
## 3. Connect to the instance via ssh
After around 45-60 seconds we can connect to the Habana Gaudi instance via ssh. We will use the following command to get the public IP and then ssh into the machine using the earlier created key pair.
```bash
INSTANCE_NAME=habana
PUBLIC_DOMAIN=$(aws ec2 describe-instances --profile sandbox \
--filters Name=tag-value,Values=${INSTANCE_NAME}-demo \
--query 'Reservations[*].Instances[*].PublicDnsName' \
--output text)
ssh -i ${KEY_NAME}.pem ubuntu@${PUBLIC_DOMAIN//[$'\t\r\n ']}
```
Let's see if we can access the Gaudi devices. Habana provides a CLI tool similar to `nvidia-smi` with the `hl-smi` command.
You can find more documentation [here](https://docs.habana.ai/en/latest/Management_and_Monitoring/System_Management_Tools_Guide/System_Management_Tools.html).
```bash
hl-smi
```
You should see a similar output to the one below.
![hl-smi](/static/blog/getting-started-habana-gaudi/hl-smi.png)
We can also test if we can allocate the `hpu` device in `PyTorch`. Therefore we will pull the latest docker image with torch installed and run `python3` with the code snippet below. A more detailed guide can be found in [Porting a Simple PyTorch Model to Gaudi](https://docs.habana.ai/en/latest/PyTorch/Migration_Guide/Porting_Simple_PyTorch_Model_to_Gaudi.html).
Start the docker container with torch installed:
```bash
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:latest
```
Start a python3 session with `python3` and execute the code below
```python
import torch_hpu
print(f"device available:{torch_hpu.is_available()}")
print(f"device_count:{torch_hpu.device_count()}")
```
## 4. Use Jupyter Notebook/Lab via ssh
Connecting via ssh works as expected, but who likes to develop inside a terminal? In this section we will learn how to install `Jupyter` and `Jupyter Notebook/Lab`, and how to connect to it to have a better machine learning environment than just a terminal. For this to work we need to add port forwarding to the ssh connection to be able to open the notebook in the browser.
First, we need to create a new `ssh` connection with port forwarding to and from port `8888`:
```bash
INSTANCE_NAME=habana
PUBLIC_DOMAIN=$(aws ec2 describe-instances --profile sandbox \
--filters Name=tag-value,Values=${INSTANCE_NAME}-demo \
--query 'Reservations[*].Instances[*].PublicDnsName' \
--output text)
ssh -L 8888:localhost:8888 -i ${KEY_NAME}.pem ubuntu@${PUBLIC_DOMAIN//[$'\t\r\n ']}
```
After we are connected, we again start our container, this time with a mounted volume so that we don't lose our data later.
```bash
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host -v /home/ubuntu:/home/ubuntu -w /home/ubuntu vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:latest
```
The next and last step is to install and run `jupyter`.
```bash
pip install jupyter
jupyter notebook --allow-root
```
You should see a familiar jupyter output with a url to the notebook.
```bash
http://localhost:8888/?token=c7a150a559c3e9d6d48d285f7023a341aaf94dac994d787d
```
We can click on it and a jupyter environment opens in our local browser.
![jupyter](/static/blog/getting-started-habana-gaudi/jupyter.png)
We can now run the same tests as in the terminal. Therefore, create a new notebook and run the following code:
```python
import torch_hpu
print(f"device available:{torch_hpu.is_available()}")
print(f"device_count:{torch_hpu.device_count()}")
```
![jupyter_devices](/static/blog/getting-started-habana-gaudi/jupyter_devices.png)
## 5. Fine-tune Hugging Face Transformers with Optimum
Our development environments are set up. Now let's install and test the Hugging Face Transformers on habana. To do this we simply install the [transformers](https://github.com/huggingface/transformers) and [optimum[habana]](https://github.com/huggingface/optimum-habana) packages via `pip`.
```bash
pip install transformers datasets
pip install git+https://github.com/huggingface/optimum-habana.git # workaround until release of optimum-habana
```
After we have installed the packages we can start fine-tuning a transformers model with the `optimum` package. Below you can find a simplified example fine-tuning `bert-base-uncased` model on the `emotion` dataset for `text-classification` task. This is a very simplified example, which only uses 1 Gaudi Processor instead of 8 and the `TrainingArguments` are not optimized.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_dataset
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
# load pre-trained model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# load dataset
dataset = load_dataset("emotion")
# preprocess dataset
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# define Gaudi Training Arguments
training_args = GaudiTrainingArguments(
output_dir=".",
use_habana=True,
use_lazy_mode=True,
gaudi_config_name="Habana/bert-base-uncased",
per_device_train_batch_size=48
)
# Initialize our Trainer
trainer = GaudiTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
tokenizer=tokenizer,
)
# Run training
trainer.train()
```
![fine-tuning](/static/blog/getting-started-habana-gaudi/fine-tuning.png)
_We will create a more detailed guide on how to leverage the habana instances in the near future._
## 6. Clean up
To make sure we stop/delete everything we created you can follow the steps below.
1. Terminate the ec2 instance
```bash
INSTANCE_NAME=habana
aws ec2 terminate-instances --instance-ids $(aws ec2 describe-instances --filters "Name=tag:Name,Values=${INSTANCE_NAME}-demo" --query 'Reservations[*].Instances[*].InstanceId' --output text) \
2>&1 > /dev/null
```
2. Delete security group. _can be deleted once the instance is terminated_
```bash
SG_NAME=habana
aws ec2 delete-security-group --group-name ${SG_NAME}-sg
```
3. Delete key pair. _can be deleted once the instance is terminated_
```bash
KEY_NAME=habana
aws ec2 delete-key-pair --key-name ${KEY_NAME}
rm ${KEY_NAME}.pem
```
## 7. Conclusion
That's it! Now you can start using Habana Gaudi for running your Deep Learning workloads with Hugging Face Transformers. We walked through how to set up a development environment for Habana Gaudi via the terminal or with a jupyter environment. In addition to this, you can use `vscode` via [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) to connect to your instance and run your code.
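For convenience, a minimal sketch of an `~/.ssh/config` entry (hostname and key path are placeholders you would replace with your own values):

```bash
# ~/.ssh/config (sketch): lets you run `ssh habana-gaudi` or pick the host in VS Code Remote-SSH
Host habana-gaudi
    HostName <public-dns-of-your-instance>
    User ubuntu
    IdentityFile /path/to/habana.pem
    LocalForward 8888 localhost:8888
```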
The next step is to create an advanced guide for Hugging Face Transformers with Habana Gaudi to learn how to use distributed training, configure optimized `TrainingArguments` and fine-tune & pre-train transformer models. Stay tuned!🚀
Until then you can check out more examples in the [optimum-habana](https://github.com/huggingface/optimum-habana/tree/main/examples) repository.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Workshop: Enterprise-Scale NLP with Hugging Face & Amazon SageMaker | https://www.philschmid.de/hugginface-sagemaker-workshop | 2021-12-29 | [
"HuggingFace",
"AWS",
"SageMaker"
] | In October and November, we held a workshop series on “Enterprise-Scale NLP with Hugging Face & Amazon SageMaker”. This workshop series consisted out of 3 parts and covers: Getting Started, Going Production & MLOps. | Earlier this year we announced a strategic collaboration with Amazon to make it easier for companies to use Hugging Face Transformers in Amazon SageMaker, and ship cutting-edge Machine Learning features faster. We introduced new Hugging Face Deep Learning Containers (DLCs) to train and deploy Hugging Face Transformers in Amazon SageMaker.
In addition to the Hugging Face Inference DLCs, we created a [Hugging Face Inference Toolkit for SageMaker](https://github.com/aws/sagemaker-huggingface-inference-toolkit). This Inference Toolkit leverages the `pipelines` from the `transformers` library to allow zero-code deployments of models, without requiring any code for pre-or post-processing.
In October and November, we held a workshop series on “**Enterprise-Scale NLP with Hugging Face & Amazon SageMaker**”. This workshop series consisted of 3 parts and covered:
- Getting Started with Amazon SageMaker: Training your first NLP Transformer model with Hugging Face and deploying it
- Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models with Amazon SageMaker
- MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines
We recorded all of them so you are now able to do the whole workshop series on your own to enhance your Hugging Face Transformers skills with Amazon SageMaker or vice-versa.
Below you can find all the details of each workshop and how to get started.
⚙ Github Repository: [huggingface-sagemaker-workshop-series](https://github.com/philschmid/huggingface-sagemaker-workshop-series)
📺 Youtube Playlist: [Hugging Face SageMaker Playlist](https://www.youtube.com/playlist?list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ)
_Note: The Repository contains instructions on how to access a temporary AWS account, which was available during the workshops. To do the workshop now you need to use your own or your company's AWS account._
In Addition to the workshop we created a fully dedicated [Documentation](https://huggingface.co/docs/sagemaker/main) for Hugging Face and Amazon SageMaker, which includes all the necessary information.
If the workshops are not enough for you, we also have 15 additional getting-started samples in our [Notebook Github repository](https://github.com/huggingface/notebooks/tree/master/sagemaker), which cover topics like distributed training or leveraging [Spot Instances](https://aws.amazon.com/ec2/spot/?nc1=h_ls&cards.sort-by=item.additionalFields.startDateTime&cards.sort-order=asc).
## Workshop 1: **Getting Started with Amazon SageMaker: Training your first NLP Transformer model with Hugging Face and deploying it**
In Workshop 1 you will learn how to use Amazon SageMaker to train a Hugging Face Transformer model and deploy it afterwards.
- Prepare and upload a test dataset to S3
- Prepare a fine-tuning script to be used with Amazon SageMaker Training jobs
- Launch a training job and store the trained model into S3
- Deploy the model after successful training
⚙ Code Assets: [workshop_1_getting_started_with_amazon_sagemaker](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_1_getting_started_with_amazon_sagemaker)
📺 Youtube: [workshop_1_getting_started_with_amazon_sagemaker](https://www.youtube.com/watch?v=pYqjCzoyWyo&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=6&t=5s&ab_channel=HuggingFace)
---
## Workshop 2: **Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models with Amazon SageMaker**
In Workshop 2 you will learn how to use Amazon SageMaker to deploy, scale & monitor your Hugging Face Transformer models for production workloads.
- Run Batch Prediction on JSON files using a Batch Transform
- Deploy a model from [hf.co/models](https://hf.co/models) to Amazon SageMaker and run predictions
- Configure autoscaling for the deployed model
- Monitor the model to see avg. request time and set up alarms
⚙ Code Assets: [workshop_2_going_production](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_2_going_production)
📺 Youtube: [workshop_2_going_production](https://www.youtube.com/watch?v=whwlIEITXoY&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=6&t=61s)
---
## Workshop 3: **MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines**
In Workshop 3 you will learn how to build an End-to-End MLOps Pipeline for Hugging Face Transformers from training to production using Amazon SageMaker.
We are going to create an automated SageMaker Pipeline which:
- processes a dataset and uploads it to s3
- fine-tunes a Hugging Face Transformer model with the processed dataset
- evaluates the model against an evaluation set
- deploys the model if it performed better than a certain threshold
⚙ Code Assets: [workshop_3_mlops](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_3_mlops)
📺 Youtube: [workshop_3_mlops](https://www.youtube.com/watch?v=XGyt8gGwbY0&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=7)
---
# Next Steps
We are planning to continue our workshops in early 2022 to build solution-oriented applications using Hugging Face Transformers, AWS & Amazon SageMaker. If you have an idea or a certain wish about something we should cover please open a thread on the forum: [https://discuss.huggingface.co/c/sagemaker/17](https://discuss.huggingface.co/c/sagemaker/17).
If you want to learn about Hugging Face Transformers on Amazon SageMaker you can checkout our Amazon SageMaker documentation at: https://huggingface.co/docs/sagemaker/main
Or jump into one of our samples at: https://github.com/huggingface/notebooks/tree/master/sagemaker
---
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Fine-tune a non-English GPT-2 Model with Huggingface | https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface | 2020-09-06 | [
"NLP",
"GPT-2",
"Huggingface"
] | Fine-tune non-English, German GPT-2 model with Huggingface on German recipes. Using their Trainer class and Pipeline objects. | Unless you’re living under a rock, you probably have heard about [OpenAI](https://openai.com/)'s GPT-3 language model.
You might also have seen all the crazy demos, where the model writes `JSX`, `HTML` code, or its capabilities in the area
of zero-shot / few-shot learning. [Simon O'Regan](https://twitter.com/Simon_O_Regan) wrote an
[article with excellent demos and projects built on top of GPT-3](https://towardsdatascience.com/gpt-3-demos-use-cases-implications-77f86e540dc1).
A Downside of GPT-3 is its 175 billion parameters, which results in a model size of around 350GB. For comparison, the
biggest implementation of GPT-2 has 1.5 billion parameters. This is less than 1/116 of GPT-3's size.
In fact, with close to 175B trainable parameters, GPT-3 is much bigger than any other model out there. Here is a comparison of the number of parameters of recent popular NLP models; GPT-3 clearly stands out.
![model-comparison](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/models.svg)
This is all magnificent, but you do not need 175 billion parameters to get good results in `text-generation`.
There are already tutorials on how to fine-tune GPT-2. But a lot of them are obsolete or outdated. In this tutorial, we
are going to use the `transformers` library by [Huggingface](https://huggingface.co/) in their newest version (3.1.0).
We will use the new `Trainer` class and fine-tune our GPT-2 Model with German recipes from
[chefkoch.de](http://chefkoch.de).
You can find everything we are doing in this
[colab notebook.](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)
---
## Transformers Library by [Huggingface](https://huggingface.co/)
![/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/transformers-logo](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/transformers-logo.png)
The [Transformers library](https://github.com/huggingface/transformers) provides state-of-the-art machine learning
architectures like BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5 for Natural Language Understanding (NLU), and
Natural Language Generation (NLG). It also provides thousands of pre-trained models in 100+ different languages and is
deeply interoperable between PyTorch & TensorFlow 2.0. It enables developers to fine-tune machine learning models for
different NLP-tasks like text classification, sentiment analysis, question-answering, or text generation.
---
## Tutorial
In the tutorial, we fine-tune a German GPT-2 from the [Huggingface model hub](https://huggingface.co/models). As data,
we use the [German Recipes Dataset](https://www.kaggle.com/sterby/german-recipes-dataset), which consists of 12190
german recipes with metadata crawled from [chefkoch.de](http://chefkoch.de/).
We will use the recipe Instructions to fine-tune our GPT-2 model and let us write recipes afterwards that we can cook.
![colab-snippet](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/colab-snippet.png)
We use a Google Colab with a GPU runtime for this tutorial. If you are not sure how to use a GPU Runtime take a look
[here](https://www.philschmid.de/google-colab-the-free-gpu-tpu-jupyter-notebook-service).
**What are we going to do:**
- load the dataset from Kaggle
- prepare the dataset and build a `TextDataset`
- initialize `Trainer` with `TrainingArguments` and GPT-2 model
- train and save the model
- test the model
You can find everything we do in this
[colab notebook.](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)
---
## Load the dataset from Kaggle
As already mentioned in the introduction of the tutorial we use the
"[German Recipes Dataset](https://www.kaggle.com/sterby/german-recipes-dataset)" dataset from Kaggle. The dataset
consists of 12190 german recipes with metadata crawled from [chefkoch.de](http://chefkoch.de/). In this example, we only
use the Instructions of the recipes. We download the dataset by using the "Download" button and upload it to our colab
notebook since it only has a zipped size of 4,7MB.
![kaggle-dataset](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/kaggle-dataset.png)
```python
#upload files to your colab environment
from google.colab import files
uploaded = files.upload()
#132879_316218_bundle_archive.zip(application/zip) - 4749666 bytes, last modified: 29.8.2020 - 100% done
#Saving 132879_316218_bundle_archive.zip to 132879_316218_bundle_archive.zip
```
After we uploaded the file we use `unzip` to extract the `recipes.json` .
```python
!unzip '132879_316218_bundle_archive.zip'
#Archive: 132879_316218_bundle_archive.zip
#inflating: recipes.json
```
_You could also use the `kaggle` CLI to download the dataset, but be aware that you need your Kaggle credentials in the colab notebook._
```python
!kaggle datasets download -d sterby/german-recipes-dataset
```
Here is an example of a recipe.
```json
{
"Url": "https://www.chefkoch.de/rezepte/2718181424631245/",
"Instructions": "Vorab folgende Bemerkung: Alle Mengen sind Circa-Angaben und können nach Geschmack variiert werden!Das Gemüse putzen und in Stücke schneiden (die Tomaten brauchen nicht geschält zu werden!). Alle Zutaten werden im Mixer püriert, das muss wegen der Mengen in mehreren Partien geschehen, und zu jeder Partie muss auch etwas von der Brühe gegeben werden. Auch das Toastbrot wird mitpüriert, es dient der Bindung. Am Schluss lässt man das \u00d6l bei laufendem Mixer einflie\u00dfen. In einer gro\u00dfen Schüssel alles gut verrühren und für mindestens eine Stunde im Kühlschrank gut durchkühlen lassen.Mit frischem Baguette an hei\u00dfen Tagen ein Hochgenuss.Tipps: Wer mag, kann in kleine Würfel geschnittene Tomate, Gurke und Zwiebel separat dazu reichen.Die Suppe eignet sich hervorragend zum Einfrieren, so dass ich immer diese gro\u00dfe Menge zubereite, um den Arbeitsaufwand gering zu halten.",
"Ingredients": [
"1 kg Strauchtomate(n)",
"1 Gemüsezwiebel(n)",
"1 Salatgurke(n)",
"1 Paprikaschote(n) nach Wahl",
"6 Zehe/n Knoblauch",
"1 Chilischote(n)",
"15 EL Balsamico oder Weinessig",
"6 EL Olivenöl",
"4 Scheibe/n Toastbrot",
"Salz und Pfeffer",
"1 kl. Dose/n Tomate(n), geschälte, oder 1 Pck. pürierte Tomaten",
"1/2Liter Brühe, kalte"
],
"Day": 1,
"Name": "Pilz Stroganoff",
"Year": 2017,
"Month": "July",
"Weekday": "Saturday"
}
```
## Prepare the dataset and build a `TextDataset`
The next step is to extract the instructions from all recipes and build a `TextDataset`. The `TextDataset` is a custom
implementation of the
[PyTorch `Dataset` class](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#dataset-class) implemented by the transformers library. If you want to know more about `Dataset` in PyTorch you can check out this
[youtube video](https://www.youtube.com/watch?v=PXOzkkB5eH0&ab_channel=PythonEngineer).
First, we split the `recipes.json` into a `train` and `test` section. Then we extract `Instructions` from the recipes
and write them into a `train_dataset.txt` and `test_dataset.txt`
```python
import re
import json
from sklearn.model_selection import train_test_split
with open('recipes.json') as f:
data = json.load(f)
def build_text_files(data_json, dest_path):
f = open(dest_path, 'w')
data = ''
for texts in data_json:
summary = str(texts['Instructions']).strip()
summary = re.sub(r"\s", " ", summary)
data += summary + " "
f.write(data)
train, test = train_test_split(data,test_size=0.15)
build_text_files(train,'train_dataset.txt')
build_text_files(test,'test_dataset.txt')
print("Train dataset length: "+str(len(train)))
print("Test dataset length: "+ str(len(test)))
#Train dataset length: 10361
#Test dataset length: 1829
```
The next step is to download the tokenizer. We use the tokenizer from the `german-gpt2` model.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("anonymous-german-nlp/german-gpt2")
train_path = 'train_dataset.txt'
test_path = 'test_dataset.txt'
```
Now we can build our `TextDataset`. Therefore we create a `TextDataset` instance with the `tokenizer` and the path to
our datasets. We also create our `data_collator`, which is used in training to form a batch from our dataset.
```python
from transformers import TextDataset,DataCollatorForLanguageModeling
def load_dataset(train_path,test_path,tokenizer):
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path=train_path,
block_size=128)
test_dataset = TextDataset(
tokenizer=tokenizer,
file_path=test_path,
block_size=128)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False,
)
return train_dataset,test_dataset,data_collator
train_dataset,test_dataset,data_collator = load_dataset(train_path,test_path,tokenizer)
```
---
## Initialize `Trainer` with `TrainingArguments` and GPT-2 model
The [Trainer](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer) class provides an API
for feature-complete training. It is used in most of
the [example scripts](https://huggingface.co/transformers/examples.html) from Huggingface. Before we can instantiate our
`Trainer` we need to download our GPT-2 model and create
[TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments). The
`TrainingArguments` are used to define the Hyperparameters, which we use in the training process like the
`learning_rate`, `num_train_epochs`, or `per_device_train_batch_size`. You can find a complete list
[here](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments).
```python
from transformers import Trainer, TrainingArguments, AutoModelWithLMHead
model = AutoModelWithLMHead.from_pretrained("anonymous-german-nlp/german-gpt2")
training_args = TrainingArguments(
output_dir="./gpt2-gerchef", #The output directory
overwrite_output_dir=True, #overwrite the content of the output directory
num_train_epochs=3, # number of training epochs
per_device_train_batch_size=32, # batch size for training
per_device_eval_batch_size=64, # batch size for evaluation
eval_steps = 400, # Number of update steps between two evaluations.
save_steps=800, # after # steps model is saved
warmup_steps=500,# number of warmup steps for learning rate scheduler
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=test_dataset,
prediction_loss_only=True,
)
```
---
## Train and Save the model
To train the model we can simply run `trainer.train()`.
```python
trainer.train()
```
After training is done you can save the model by calling `save_model()`. This will save the trained model to our
`output_dir` from our `TrainingArguments`.
```python
trainer.save_model()
```
---
## Test the model
To test the model we use another
[highlight of the transformers library](https://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines)
called `pipeline`. [Pipelines](https://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines) are
objects that offer a simple API dedicated to several tasks, `text-generation` amongst others.
```python
from transformers import pipeline
chef = pipeline('text-generation',model='./gpt2-gerchef', tokenizer='anonymous-german-nlp/german-gpt2',config={'max_length':800})
result = chef('Zuerst Tomaten')[0]['generated_text']
```
result:
"_Zuerst Tomaten dazu geben und 2 Minuten kochen lassen. Die Linsen ebenfalls in der Brühe anbrühen.Die Tomaten
auspressen. Mit der Butter verrühren. Den Kohl sowie die Kartoffeln andünsten, bis sie weich sind. "_
Well, that's it. We've done it 👨🏻🍳. We have successfully fine-tuned our GPT-2 model to write us recipes.
To improve our results we could train it longer and adjust our `TrainingArguments` or enlarge the dataset.
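As a rough sketch of what such an adjustment could look like (the values below are illustrative, not tuned):

```python
# illustrative adjustments: more epochs and an explicit, slightly lower learning rate for a longer run
training_args = TrainingArguments(
    output_dir="./gpt2-gerchef",
    overwrite_output_dir=True,
    num_train_epochs=10,              # train longer
    learning_rate=3e-5,               # explicit learning rate for the longer run
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    eval_steps=400,
    save_steps=800,
    warmup_steps=500,
)
```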
---
You can find everything in this
[colab notebook.](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
An Amazon SageMaker Inference comparison with Hugging Face Transformers | https://www.philschmid.de/sagemaker-inference-comparison | 2022-05-17 | [
"HuggingFace",
"AWS",
"BERT",
"SageMaker"
] | Learn about the different existing Amazon SageMaker Inference options and and how to use them. | _"Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment."_ - [AWS Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html)
As of today, Amazon SageMaker offers 4 different inference options with:
- [Real-Time inference](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html)
- [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html)
- [Asynchronous Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html)
- [Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html)
Each of these inference options has different characteristics and use cases. Therefore we have created a table comparing the existing SageMaker inference options in terms of latency, execution period, maximum payload size, and pricing, together with getting-started examples on how to use each of the inference options.
**Comparison table**
| Option | latency budget | execution period | max payload size | real-world example | accelerators (GPU) | pricing |
| --------------- | -------------- | ----------------------- | ---------------- | ----------------------- | ------------------ | ------------------------------------------------------------- |
| real-time | milliseconds | constantly | 6MB | route estimation | Yes | up time of the endpoint |
| batch transform | hours | once a day/week | Unlimited | nightly embedding jobs | Yes | prediction (transform) time |
| async inference | minutes | every few minutes/hours | 1GB | post-call transcription | Yes | up time of the endpoint, can scale to 0 when there is no load |
| serverless | seconds | every few minutes | 6MB | PoC for classification | No | compute time (serverless) |
**Examples**
You will learn how to:
1. Deploy a Hugging Face Transformers For Real-Time inference.
2. Deploy a Hugging Face Transformers for Batch Transform Inference.
3. Deploy a Hugging Face Transformers for Asynchronous Inference.
4. Deploy a Hugging Face Transformers for Serverless Inference.
---
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
## Permissions
_If you are going to use Sagemaker in a local environment. You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
!pip install "sagemaker>=2.48.0" --upgrade
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit)
The SageMaker Hugging Face Inference Toolkit is an open-source library for serving 🤗 Transformers models on Amazon SageMaker. This library provides default pre-processing, prediction and post-processing for certain 🤗 Transformers models and tasks using the `transformers` `pipelines`.
The Inference Toolkit accepts inputs in the `inputs` key, and supports additional pipeline `parameters` in the `parameters` key. You can provide any of the supported kwargs from `pipelines` as `parameters`.
Tasks supported by the Inference Toolkit API include:
- **`text-classification`**
- **`sentiment-analysis`**
- **`token-classification`**
- **`feature-extraction`**
- **`fill-mask`**
- **`summarization`**
- **`translation_xx_to_yy`**
- **`text2text-generation`**
- **`text-generation`**
- **`audio-classification`**
- **`automatic-speech-recognition`**
- **`conversational`**
- **`image-classification`**
- **`image-segmentation`**
- **`object-detection`**
- **`table-question-answering`**
- **`zero-shot-classification`**
- **`zero-shot-image-classification`**
See the following request examples for some of the tasks:
**text-classification**
```python
{
"inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}
```
**text-generation parameterized**
```python
{
"inputs": "Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021, is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016, was recognized for its work in democratizing NLP, the global market value for which is expected to hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team.",
"parameters": {
"repetition_penalty": 4.0,
"length_penalty": 1.5
}
}
```
More documentation and a list of supported tasks can be found in the [documentation](https://huggingface.co/docs/sagemaker/reference#inference-toolkit-api).
## 1. Deploy a Hugging Face Transformers For Real-Time inference.
### What are Amazon SageMaker Real-Time Endpoints?
Real-time inference is ideal for inference workloads where you have real-time, interactive, low latency requirements. You can deploy your model to SageMaker hosting services and get an endpoint that can be used for inference. These endpoints are fully managed and support autoscaling.
**Deploying a model using SageMaker hosting services is a three-step process:**
1. **Create a model in SageMaker** —By creating a model, you tell SageMaker where it can find the model components.
2. **Create an endpoint configuration for an HTTPS endpoint** —You specify the name of one or more models in production variants and the ML compute instances that you want SageMaker to launch to host each production variant.
3. **Create an HTTPS endpoint** —Provide the endpoint configuration to SageMaker. The service launches the ML compute instances and deploys the model or models as specified in the configuration
![endpoint-overview](/static/blog/sagemaker-inference-comparison/sm-endpoint.png)
### Deploy a Hugging Face Transformer from the [Hub](hf.co/models)
Detailed Notebook: [deploy_model_from_hf_hub](https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb)
To deploy a model directly from the Hub to SageMaker we need to define 2 environment variables when creating the `HuggingFaceModel`. We need to define:
- `HF_MODEL_ID`: defines the model id, which will be automatically loaded from [huggingface.co/models](http://huggingface.co/models) when creating our SageMaker Endpoint. The 🤗 Hub provides +14 000 models all available through this environment variable.
- `HF_TASK`: defines the task for the used 🤗 Transformers pipeline. A full list of tasks can be found [here](https://huggingface.co/docs/sagemaker/reference#inference-toolkit-api).
```python
from sagemaker.huggingface import HuggingFaceModel
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'distilbert-base-uncased-distilled-squad', # model_id from hf.co/models
'HF_TASK':'question-answering' # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model_rth = HuggingFaceModel(
env=hub, # hugging face hub configuration
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version="py38", # python version of the DLC
)
# deploy model to SageMaker Inference
predictor_rth = huggingface_model_rth.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge"
)
```
After the model is deployed we can use the `predictor` to send requests.
```python
# example request, you always need to define "inputs"
data = {
"inputs": {
"question": "What is used for inference?",
"context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
}
}
# request
predictor_rth.predict(data)
```
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
predictor_rth.delete_model()
predictor_rth.delete_endpoint()
```
### Deploy a Hugging Face Transformer from Amazon S3
Detailed Notebook: [deploy_model_from_s3](https://github.com/huggingface/notebooks/blob/main/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb)
To deploy a model directly from Amazon S3 to SageMaker we need to define the `model_data` argument when creating the `HuggingFaceModel`:
- `model_data`: the S3 URI of our trained model artifact (`model.tar.gz`), e.g. the output of a previous SageMaker training job, containing the model weights and tokenizer files.
```python
from sagemaker.huggingface import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model_rts3 = HuggingFaceModel(
model_data="s3://hf-sagemaker-inference/model.tar.gz", # path to your trained sagemaker model
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version="py38", # python version of the DLC
)
# deploy model to SageMaker Inference
predictor_rts3 = huggingface_model_rts3.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge"
)
```
After the model is deployed we can use the `predictor` to send requests.
```python
# example request, you always need to define "inputs"
data = {
"inputs": "The new Hugging Face SageMaker DLC makes it super easy to deploy models in production. I love it!"
}
# request
predictor_rts3.predict(data)
# [{'label': 'POSITIVE', 'score': 0.9996660947799683}]
```
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
predictor_rts3.delete_model()
predictor_rts3.delete_endpoint()
```
## 2. Deploy a Hugging Face Transformers for Batch Transform Inference.
Detailed Notebook: [batch_transform_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb)
### What is Amazon SageMaker Batch Transform?
A Batch transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify. Similar to real-time hosting, it creates a web server that takes in HTTP POST requests, but additionally runs an Agent. The Agent reads the data from Amazon S3, sends it to the web server, and stores the predictions back to Amazon S3 at the end. The benefit of Batch Transform is that the instances are only used during the "job" and stopped afterwards.
![batch-transform](/static/blog/sagemaker-inference-comparison/batch-transform-v2.png)
**Use batch transform when you:**
- Want to get inferences for an entire dataset and index them to serve inferences in real time
- Don't need a persistent endpoint that applications (for example, web or mobile apps) can call to get inferences
- Don't need the subsecond latency that SageMaker hosted endpoints provide
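The example below uses a `tweet_data.jsonl` file as input. As a minimal sketch (the records here are made-up placeholders, not the dataset from the detailed notebook), such a file contains one JSON object per line using the same `inputs` key the Inference Toolkit expects:

```python
import json

# hypothetical example records -- each line is one request payload with an "inputs" key
tweets = [
    "I love using the new Inference DLC.",
    "The weather in London is terrible today.",
]
with open("tweet_data.jsonl", "w") as f:
    for tweet in tweets:
        f.write(json.dumps({"inputs": tweet}) + "\n")
```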
```python
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.s3 import S3Uploader,s3_path_join
dataset_jsonl_file="./tweet_data.jsonl"
# uploads a given file to S3.
input_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"london/batch_transform/input")
output_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"london/batch_transform/output")
s3_file_uri = S3Uploader.upload(dataset_jsonl_file,input_s3_path)
print(f"{dataset_jsonl_file} uploaded to {s3_file_uri}")
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'cardiffnlp/twitter-roberta-base-sentiment',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1, # number of instances used for running the batch job
instance_type='ml.m5.xlarge',# instance type for the batch job
output_path=output_s3_path, # we are using the same s3 path to save the output with the input
strategy='SingleRecord') # How we are sending the "requests" to the endpoint
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=s3_file_uri, # preprocessed file location on s3
content_type='application/json',# mime-type of the file
split_type='Line') # how the datapoints are split, here lines since it is `.jsonl`
```
## 3. Deploy a Hugging Face Transformers for Asynchronous Inference.
Detailed Notebook: [async_inference_hf_hub](https://github.com/huggingface/notebooks/blob/main/sagemaker/16_async_inference_hf_hub/sagemaker-notebook.ipynb)
### What is Amazon SageMaker Asynchronous Inference?
Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously. Compared to [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html) [Asynchronous Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) provides immediate access to the results of the inference job rather than waiting for the job to complete.
![async-inference](/static/blog/sagemaker-inference-comparison/async-inference.png)
**What's the difference to batch transform & real-time inference:**
- the request payload is uploaded to Amazon S3 and the Amazon S3 URI is passed in the request
- endpoints are always up and running but can scale to zero to save costs
- responses are also uploaded to Amazon S3
- you can create an Amazon SNS topic to receive notifications when predictions are finished
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.async_inference.async_inference_config import AsyncInferenceConfig
from sagemaker.s3 import s3_path_join
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'distilbert-base-uncased-finetuned-sst-2-english',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model_async = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# create async endpoint configuration
async_config = AsyncInferenceConfig(
output_path=s3_path_join("s3://",sagemaker_session_bucket,"async_inference/output") , # Where our results will be stored
# notification_config={
# "SuccessTopic": "arn:aws:sns:us-east-2:123456789012:MyTopic",
# "ErrorTopic": "arn:aws:sns:us-east-2:123456789012:MyTopic",
# }, # Notification configuration
)
# deploy the endpoint endpoint
async_predictor = huggingface_model_async.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge",
async_inference_config=async_config
)
```
The `predict()` call will upload our `data` to Amazon S3 and run inference against it. Since we are using `predict`, the call blocks until the inference is complete.
```python
data = {
"inputs": [
"it 's a charming and often affecting journey .",
"it 's slow -- very , very slow",
"the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
"the emotions are raw and will strike a nerve with anyone who 's ever had family trauma ."
]
}
res = async_predictor.predict(data=data)
print(res)
# [{'label': 'POSITIVE', 'score': 0.9998838901519775}, {'label': 'NEGATIVE', 'score': 0.999727189540863}, {'label': 'POSITIVE', 'score': 0.9998838901519775}, {'label': 'POSITIVE', 'score': 0.9994854927062988}]
```
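If you don't want to block on the result, the SageMaker SDK also offers a non-blocking variant. A minimal sketch, assuming a recent `sagemaker` version where `predict_async` and `WaiterConfig` are available:

```python
from sagemaker.async_inference.waiter_config import WaiterConfig

# non-blocking: returns immediately with a handle to the result object in Amazon S3
response = async_predictor.predict_async(data=data)
# ... do other work here ...
# poll Amazon S3 until the output object exists (here up to ~10 minutes)
result = response.get_result(WaiterConfig(max_attempts=40, delay=15))
print(result)
```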
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
async_predictor.delete_model()
async_predictor.delete_endpoint()
```
## 4. Deploy a Hugging Face Transformers for Serverless Inference.
Detailed Notebook: [serverless_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/19_serverless_inference/sagemaker-notebook.ipynb)
### What is Amazon SageMaker Serverless Inference?
[Amazon SageMaker Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html) is a purpose-built inference option that makes it easy for you to deploy and scale ML models. Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic, eliminating the need to choose instance types or manage scaling policies. This takes away the undifferentiated heavy lifting of selecting and managing servers. Serverless Inference integrates with AWS Lambda to offer you high availability, built-in fault tolerance and automatic scaling.
![serverless](/static/blog/sagemaker-inference-comparison/serverless.png)
**Use Serverless Inference when you:**
- Want to get started quickly in a cost-effective way
- Don't need the subsecond latency that SageMaker hosted endpoints provide
- Are building proofs-of-concept where cold starts or scalability are not mission-critical
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.serializers import DataSerializer
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'facebook/wav2vec2-base-960h',
'HF_TASK':'automatic-speech-recognition',
}
# create Hugging Face Model Class
huggingface_model_sls = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# Specify MemorySizeInMB and MaxConcurrency in the serverless config object
serverless_config = ServerlessInferenceConfig(
memory_size_in_mb=4096, max_concurrency=10,
)
# create a serializer for the data
audio_serializer = DataSerializer(content_type='audio/x-audio') # using x-audio to support multiple audio formats
# deploy the endpoint endpoint
predictor_sls = huggingface_model_sls.deploy(
serverless_inference_config=serverless_config,
serializer=audio_serializer, # serializer for our audio data.
)
```
```python
!wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
audio_path = "sample1.flac"
res = predictor_sls.predict(data=audio_path)
print(res)
# {'text': "GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"}
```
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
predictor_sls.delete_model()
predictor_sls.delete_endpoint()
```
## Conclusion
Every currently available inference option has a good use case and allows companies to optimize their machine learning workloads in the best possible way. With the addition of SageMaker Serverless Inference, companies can now quickly build cost-effective proofs-of-concept and, after success, move them to real-time endpoints by changing a single line of code.
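As an illustration of that change (a minimal sketch based on the `HuggingFaceModel.deploy()` calls used throughout this post; the real-time instance type is only an example):
```python
# proof-of-concept: serverless endpoint
predictor = huggingface_model.deploy(
    serverless_inference_config=serverless_config,
)

# production: real-time endpoint - swap the serverless config for an instance configuration
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",  # example instance type
)
```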
Furthermore, this article has shown how easy it is to get started with Hugging Face Transformers on Amazon SageMaker and how you can integrate state-of-the-art machine learning into existing applications.
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines | https://www.philschmid.de/mlops-sagemaker-huggingface-transformers | 2021-11-10 | [
"AWS",
"BERT",
"HuggingFace",
"Sagemaker"
] | Learn how to build an End-to-End MLOps Pipeline for Hugging Face Transformers from training to production using Amazon SageMaker. | Welcome to this getting started guide. We will use the new Hugging Face Inference DLCs and Amazon SageMaker Python SDK to create an End-to-End MLOps Pipeline for Hugging Face Transformers from training to production using Amazon SageMaker.
This blog post demonstrates how to use [SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-sdk.html) to train a [Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) Transformer model and deploy it. The SageMaker integration with Hugging Face makes it easy to train and deploy advanced NLP models. A Lambda step in SageMaker Pipelines enables you to easily do lightweight model deployments and other serverless operations.
In this example we are going to fine-tune and deploy a `DistilBERT` model on the `imdb` dataset.
- Prerequisites: Make sure your notebook environment has IAM managed policy `AmazonSageMakerPipelinesIntegrations` as well as `AmazonSageMakerFullAccess`
- [Reference Blog Post: Use a SageMaker Pipeline Lambda step for lightweight model deployments](https://aws.amazon.com/de/blogs/machine-learning/use-a-sagemaker-pipeline-lambda-step-for-lightweight-model-deployments/)
- [Github Repository](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_3_mlops)
## Installation & Imports
We'll start by updating the SageMaker SDK, and importing some necessary packages.
```python
!pip install "sagemaker>=2.48.0" --upgrade
```
Import all relevant packages for SageMaker Pipelines.
```python
import boto3
import os
import numpy as np
import pandas as pd
import sagemaker
import sys
import time
from sagemaker.workflow.parameters import ParameterInteger, ParameterFloat, ParameterString
from sagemaker.lambda_helper import Lambda
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import CacheConfig, ProcessingStep
from sagemaker.huggingface import HuggingFace, HuggingFaceModel
import sagemaker.huggingface
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
from sagemaker.processing import ScriptProcessor
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.step_collections import CreateModelStep, RegisterModel
from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo,ConditionGreaterThanOrEqualTo
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.functions import JsonGet
from sagemaker.workflow.pipeline import Pipeline, PipelineExperimentConfig
from sagemaker.workflow.execution_variables import ExecutionVariables
```
## Permissions
_If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
```python
import sagemaker
sess = sagemaker.Session()
region = sess.boto_region_name
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sagemaker_session.default_bucket()}")
print(f"sagemaker session region: {sagemaker_session.boto_region_name}")
```
## Pipeline Overview
![pipeline](/static/blog/mlops-sagemaker-huggingface-transformers/overview.png)
## Defining the Pipeline
Before defining the pipeline, it is important to parameterize it. A SageMaker Pipeline can be parameterized directly, including instance types and counts.
Read more about Parameters in the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-parameters.html)
```python
# S3 prefix where all assets will be stored
s3_prefix = "hugging-face-pipeline-demo"
# s3 bucket used for storing assets and artifacts
bucket = sagemaker_session.default_bucket()
# aws region used
region = sagemaker_session.boto_region_name
# base name prefix for sagemaker jobs (training, processing, inference)
base_job_prefix = s3_prefix
# Cache configuration for workflow
cache_config = CacheConfig(enable_caching=False, expire_after="30d")
# package versions
transformers_version = "4.11.0"
pytorch_version = "1.9.0"
py_version = "py38"
model_id_="distilbert-base-uncased"
dataset_name_="imdb"
model_id = ParameterString(name="ModelId", default_value="distilbert-base-uncased")
dataset_name = ParameterString(name="DatasetName", default_value="imdb")
```
## 1. Processing Step
A SKLearn Processing step is used to invoke a SageMaker Processing job with a custom python script - `preprocessing.py`.
#### Processing Parameter
```python
processing_instance_type = ParameterString(name="ProcessingInstanceType", default_value="ml.c5.2xlarge")
processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=1)
processing_script = ParameterString(name="ProcessingScript", default_value="./scripts/preprocessing.py")
```
#### Processor
```python
processing_output_destination = f"s3://{bucket}/{s3_prefix}/data"
sklearn_processor = SKLearnProcessor(
framework_version="0.23-1",
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name=base_job_prefix + "/preprocessing",
sagemaker_session=sagemaker_session,
role=role,
)
step_process = ProcessingStep(
name="ProcessDataForTraining",
cache_config=cache_config,
processor=sklearn_processor,
job_arguments=["--transformers_version",transformers_version,
"--pytorch_version",pytorch_version,
"--model_id",model_id_,
"--dataset_name",dataset_name_],
outputs=[
ProcessingOutput(
output_name="train",
destination=f"{processing_output_destination}/train",
source="/opt/ml/processing/train",
),
ProcessingOutput(
output_name="test",
destination=f"{processing_output_destination}/test",
source="/opt/ml/processing/test",
),
ProcessingOutput(
output_name="validation",
destination=f"{processing_output_destination}/validation",
source="/opt/ml/processing/validation",
),
],
code=processing_script,
)
```
## 2. Model Training Step
We use SageMaker's [Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) Estimator class to create a model training step for the Hugging Face [DistilBERT](https://huggingface.co/distilbert-base-uncased) model. Transformer-based models such as the original BERT can be very large and slow to train. DistilBERT, however, is a small, fast, cheap and light Transformer model trained by distilling BERT base. It reduces the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster.
The Hugging Face estimator also takes hyperparameters as a dictionary. The training instance type and size are pipeline parameters that can be easily varied in future pipeline runs without changing any code.
### Training Parameter
```python
# training step parameters
training_entry_point = ParameterString(name="TrainingEntryPoint", default_value="train.py")
training_source_dir = ParameterString(name="TrainingSourceDir", default_value="./scripts")
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.p3.2xlarge")
training_instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
# hyperparameters, which are passed into the training job
epochs=ParameterString(name="Epochs", default_value="1")
eval_batch_size=ParameterString(name="EvalBatchSize", default_value="32")
train_batch_size=ParameterString(name="TrainBatchSize", default_value="16")
learning_rate=ParameterString(name="LearningRate", default_value="3e-5")
fp16=ParameterString(name="Fp16", default_value="True")
```
### Hugging Face Estimator
```python
huggingface_estimator = HuggingFace(
entry_point=training_entry_point,
source_dir=training_source_dir,
base_job_name=base_job_prefix + "/training",
instance_type=training_instance_type,
instance_count=training_instance_count,
role=role,
transformers_version=transformers_version,
pytorch_version=pytorch_version,
py_version=py_version,
hyperparameters={
'epochs':epochs,
'eval_batch_size': eval_batch_size,
'train_batch_size': train_batch_size,
'learning_rate': learning_rate,
'model_id': model_id,
'fp16': fp16
},
sagemaker_session=sagemaker_session,
)
step_train = TrainingStep(
name="TrainHuggingFaceModel",
estimator=huggingface_estimator,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri
),
"test": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri
),
},
cache_config=cache_config,
)
```
## 3. Model evaluation Step
A ProcessingStep is used to evaluate the performance of the trained model. Based on the results of the evaluation, either the model is created, registered, and deployed, or the pipeline stops.
In the training job, the model was evaluated against the test dataset, and the result of the evaluation was stored in the `model.tar.gz` file saved by the training job. The results of that evaluation are copied into a `PropertyFile` in this ProcessingStep so that it can be used in the ConditionStep.
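The `evaluate.py` script itself is not shown in this post. As a rough sketch (hypothetical, inferred from the `ProcessingStep` output path and the `eval_accuracy` key read by the condition step later), it only needs to write the metric to the location the `PropertyFile` points to:
```python
# hypothetical excerpt of scripts/evaluate.py
import json

# ... load model.tar.gz from /opt/ml/processing/model and evaluate it on the test set ...
eval_accuracy = 0.89  # placeholder value computed by the evaluation

# write the metric where the ProcessingOutput / PropertyFile expects it
with open("/opt/ml/processing/evaluation/evaluation.json", "w") as f:
    json.dump({"eval_accuracy": eval_accuracy}, f)
```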
### Evaluation Parameter
```python
evaluation_script = ParameterString(name="EvaluationScript", default_value="./scripts/evaluate.py")
```
### Evaluator
```python
script_eval = SKLearnProcessor(
framework_version="0.23-1",
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name=base_job_prefix + "/evaluation",
sagemaker_session=sagemaker_session,
role=role,
)
evaluation_report = PropertyFile(
name="HuggingFaceEvaluationReport",
output_name="evaluation",
path="evaluation.json",
)
step_eval = ProcessingStep(
name="HuggingfaceEvalLoss",
processor=script_eval,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model",
)
],
outputs=[
ProcessingOutput(
output_name="evaluation",
source="/opt/ml/processing/evaluation",
destination=f"s3://{bucket}/{s3_prefix}/evaluation_report",
),
],
code=evaluation_script,
property_files=[evaluation_report],
cache_config=cache_config,
)
```
## 4. Register the model
The trained model is registered in the Model Registry under a Model Package Group. Each time a new model is registered, it is given a new version number by default. The model is registered in the "Approved" state so that it can be deployed. Registration will only happen if the output of the [6. Condition for deployment](#6.-Condition-for-deployment) is true, i.e., the metrics being checked are within the defined threshold.
```python
model = HuggingFaceModel(
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
role=role,
transformers_version=transformers_version,
pytorch_version=pytorch_version,
py_version=py_version,
sagemaker_session=sagemaker_session,
)
model_package_group_name = "HuggingFaceModelPackageGroup"
step_register = RegisterModel(
name="HuggingFaceRegisterModel",
model=model,
content_types=["application/json"],
response_types=["application/json"],
inference_instances=["ml.g4dn.xlarge", "ml.m5.xlarge"],
transform_instances=["ml.g4dn.xlarge", "ml.m5.xlarge"],
model_package_group_name=model_package_group_name,
approval_status="Approved",
)
```
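Once the pipeline has run, the registered versions can be inspected programmatically. A small sketch (not part of the pipeline itself) using `boto3`:
```python
import boto3

sm_client = boto3.client("sagemaker")

# list all versions registered under the model package group used above
response = sm_client.list_model_packages(ModelPackageGroupName="HuggingFaceModelPackageGroup")
for package in response["ModelPackageSummaryList"]:
    print(package["ModelPackageVersion"], package["ModelApprovalStatus"])
```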
## 5. Model Deployment
We create a custom step `ModelDeployment` derived from the provided `LambdaStep`. This step will create a Lambda function and invoke it to deploy our model as a SageMaker Endpoint.
```python
# custom Helper Step for ModelDeployment
from utils.deploy_step import ModelDeployment
# we will use the iam role from the notebook session for the created endpoint
# this role will be attached to our endpoint and need permissions, e.g. to download assets from s3
sagemaker_endpoint_role=sagemaker.get_execution_role()
step_deployment = ModelDeployment(
model_name=f"{model_id_}-{dataset_name_}",
registered_model=step_register.steps[0],
endpoint_instance_type="ml.g4dn.xlarge",
sagemaker_endpoint_role=sagemaker_endpoint_role,
autoscaling_policy=None,
)
```
## 6. Condition for deployment
For the condition to be `True` and the steps after evaluation to run, the evaluated accuracy of the Hugging Face model must be greater than or equal to our `ThresholdAccuracy` parameter.
### Condition Parameter
```python
threshold_accuracy = ParameterFloat(name="ThresholdAccuracy", default_value=0.8)
```
### Condition
```python
cond_gte = ConditionGreaterThanOrEqualTo(
left=JsonGet(
step=step_eval,
property_file=evaluation_report,
json_path="eval_accuracy",
),
right=threshold_accuracy,
)
step_cond = ConditionStep(
name="CheckHuggingfaceEvalAccuracy",
conditions=[cond_gte],
if_steps=[step_register, step_deployment],
else_steps=[],
)
```
# Pipeline definition and execution
SageMaker Pipelines constructs the pipeline graph from the implicit definition created by the way pipeline steps inputs and outputs are specified. There's no need to specify that a step is a "parallel" or "serial" step. Steps such as model registration after the condition step are not listed in the pipeline definition because they do not run unless the condition is true. If so, they are run in order based on their specified inputs and outputs.
Each Parameter we defined holds a default value, which can be overwritten before starting the pipeline. [Parameter Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-parameters.html)
### Overwriting Parameters
```python
# define parameter which should be overwritten
pipeline_parameters=dict(
ModelId="distilbert-base-uncased",
ThresholdAccuracy=0.7,
Epochs="3",
TrainBatchSize="32",
EvalBatchSize="64",
)
```
### Create Pipeline
```python
pipeline = Pipeline(
name=f"HuggingFaceDemoPipeline",
parameters=[
model_id,
dataset_name,
processing_instance_type,
processing_instance_count,
processing_script,
training_entry_point,
training_source_dir,
training_instance_type,
training_instance_count,
evaluation_script,
threshold_accuracy,
epochs,
eval_batch_size,
train_batch_size,
learning_rate,
fp16
],
steps=[step_process, step_train, step_eval, step_cond],
sagemaker_session=sagemaker_session,
)
```
We can examine the pipeline definition in JSON format. You can also inspect the pipeline graph in SageMaker Studio by going to the page for your pipeline.
```python
import json
json.loads(pipeline.definition())
```
![pipeline](/static/blog/mlops-sagemaker-huggingface-transformers/pipeline.png)
`upsert` creates or updates the pipeline.
```python
pipeline.upsert(role_arn=role)
```
### Run the pipeline
```python
execution = pipeline.start(parameters=pipeline_parameters)
```
```python
execution.wait()
```
## Getting predictions from the endpoint
After the previous cell completes, you can check whether the endpoint has finished deploying.
We can use the `endpoint_name` to create a `HuggingFacePredictor` object that will be used to get predictions.
```python
from sagemaker.huggingface import HuggingFacePredictor
endpoint_name = f"{model_id_}-{dataset_name_}"
# check if endpoint is up and running
print(f"https://console.aws.amazon.com/sagemaker/home?region={region}#/endpoints/{endpoint_name}")
hf_predictor = HuggingFacePredictor(endpoint_name,sagemaker_session=sagemaker_session)
```
### Test data
Here are a couple of sample reviews we would like to classify as positive (`pos`) or negative (`neg`). Demonstrating the power of advanced Transformer-based models such as this Hugging Face model, the model should do quite well even though the reviews are mixed.
```python
sentiment_input1 = {"inputs":"Although the movie had some plot weaknesses, it was engaging. Special effects were mind boggling. Can't wait to see what this creative team does next."}
hf_predictor.predict(sentiment_input1)
# [{'label': 'pos', 'score': 0.9690886735916138}]
sentiment_input2 = {"inputs":"There was some good acting, but the story was ridiculous. The other sequels in this franchise were better. It's time to take a break from this IP, but if they switch it up for the next one, I'll check it out."}
hf_predictor.predict(sentiment_input2)
# [{'label': 'neg', 'score': 0.9938264489173889}]
```
## Cleanup Resources
The following cell will delete the resources created by the Lambda function and the Lambda itself.
Deleting other resources such as the S3 bucket and the IAM role for the Lambda function are the responsibility of the notebook user.
```python
sm_client = boto3.client("sagemaker")
# Delete the Lambda function
step_deployment.func.delete()
# Delete the endpoint
hf_predictor.delete_endpoint()
```
# Conclusion
With the help of Amazon SageMaker Pipelines we were able to create a 100% managed End-to-End Machine Learning Pipeline without the need to think about any administration tasks.
Through the simplicity of SageMaker you don't need huge Ops-teams anymore to manage and scale your machine learning pipelines. You can do it yourself.
---
You can find the code [here](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_3_mlops) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Speed up BERT inference with Hugging Face Transformers and AWS Inferentia | https://www.philschmid.de/huggingface-bert-aws-inferentia | 2022-03-16 | [
"HuggingFace",
"AWS",
"BERT",
"Inferentia"
] | Learn how to accelerate BERT and Transformers inference using Hugging Face Transformers and AWS Inferentia. | notebook: [sagemaker/18_inferentia_inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb)
The adoption of [BERT](https://huggingface.co/blog/bert-101) and [Transformers](https://huggingface.co/docs/transformers/index) continues to grow. Transformer-based models are now not only achieving state-of-the-art performance in Natural Language Processing but also for [Computer Vision](https://arxiv.org/abs/2010.11929), [Speech](https://arxiv.org/abs/2006.11477), and [Time-Series](https://arxiv.org/abs/2002.06103). 💬 🖼 🎤 ⏳
Companies are now slowly moving from the experimentation and research phase to the production phase in order to use transformer models for large-scale workloads. But by default, BERT and its friends are relatively slow, big, and complex models compared to traditional Machine Learning algorithms. Accelerating Transformers and BERT is, and will remain, an interesting challenge to solve.
AWS's take to solve this challenge was to design a custom machine learning chip designed for optimized inference workload called [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/?nc1=h_ls). AWS says that AWS Inferentia _“delivers up to 80% lower cost per inference and up to 2.3X higher throughput than comparable current generation GPU-based Amazon EC2 instances.”_
The real value of AWS Inferentia instances compared to GPU comes through the multiple Neuron Cores available on each device. A Neuron Core is the custom accelerator inside AWS Inferentia. Each Inferentia chip comes with 4x Neuron Cores. This enables you to either load 1 model on each core (for high throughput) or 1 model across all cores (for lower latency).
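How the model-to-core mapping is chosen is controlled via an environment variable. A minimal sketch (the same `NEURONCORE_GROUP_SIZES` variable is used in the inference script later in this post; the values here are illustrative):
```python
import os

# one model per Neuron core: up to 4 copies served in parallel, maximizes throughput
os.environ["NEURONCORE_GROUP_SIZES"] = "1"

# alternatively, one model spread across all 4 cores of the chip, minimizes latency
# os.environ["NEURONCORE_GROUP_SIZES"] = "4"
```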
## Tutorial
In this end-to-end tutorial, you will learn how to speed up BERT inference for text classification with Hugging Face Transformers, Amazon SageMaker, and AWS Inferentia.
You can find the notebook here: [sagemaker/18_inferentia_inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb)
You will learn how to:
- [1. Convert your Hugging Face Transformer to AWS Neuron](#1-convert-your-hugging-face-transformer-to-aws-neuron)
- [2. Create a custom `inference.py` script for `text-classification`](#2-create-a-custom-inferencepy-script-for-text-classification)
- [3. Create and upload the neuron model and inference script to Amazon S3](#3-create-and-upload-the-neuron-model-and-inference-script-to-amazon-s3)
- [4. Deploy a Real-time Inference Endpoint on Amazon SageMaker](#4-deploy-a-real-time-inference-endpoint-on-amazon-sagemaker)
- [5. Run and evaluate Inference performance of BERT on Inferentia](#5-run-and-evaluate-inference-performance-of-bert-on-inferentia)
Let's get started! 🚀
---
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
## 1. Convert your Hugging Face Transformer to AWS Neuron
We are going to use the [AWS Neuron SDK for AWS Inferentia](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html). The Neuron SDK includes a deep learning compiler, runtime, and tools for converting and compiling PyTorch and TensorFlow models to neuron compatible models, which can be run on [EC2 Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/).
As a first step, we need to install the [Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/neuron-install-guide.html) and the required packages.
_Tip: If you are using Amazon SageMaker Notebook Instances or Studio you can go with the `conda_python3` conda kernel._
```python
# Set Pip repository to point to the Neuron repository
!pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Install Neuron PyTorch
!pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] sagemaker>=2.79.0 transformers==4.12.3 --upgrade
```
After we have installed the Neuron SDK we can load and convert our model. Neuron models are converted using `torch_neuron` with its `trace` method similar to `torchscript`. You can find more information in our [documentation](https://huggingface.co/docs/transformers/serialization#torchscript).
To be able to convert our model we first need to select the model we want to use for our text classification pipeline from [hf.co/models](http://hf.co/models). For this example, let's go with [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) but this can be easily adjusted with other BERT-like models.
```python
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
```
At the time of writing, the [AWS Neuron SDK does not support dynamic shapes](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#dynamic-shapes), which means that the input size needs to be static for compiling and inference.
In simpler terms, this means that when the model is compiled with e.g. an input of batch size 1 and sequence length of 16, the model can only run inference on inputs with that same shape.
_When using a `t2.medium` instance the compilation takes around 3 minutes_
```python
import os
import tensorflow # to workaround a protobuf version conflict issue
import torch
import torch.neuron
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, torchscript=True)
# create dummy input for max length 128
dummy_input = "dummy input which will be padded later"
max_length = 128
embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt")
neuron_inputs = tuple(embeddings.values())
# compile model with torch.neuron.trace and update config
model_neuron = torch.neuron.trace(model, neuron_inputs)
model.config.update({"traced_sequence_length": max_length})
# save tokenizer, neuron model and config for later use
save_dir="tmp"
os.makedirs("tmp",exist_ok=True)
model_neuron.save(os.path.join(save_dir,"neuron_model.pt"))
tokenizer.save_pretrained(save_dir)
model.config.save_pretrained(save_dir)
```
## 2. Create a custom `inference.py` script for `text-classification`
The [Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) supports zero-code deployments on top of the [pipeline feature](https://huggingface.co/transformers/main_classes/pipelines.html) from 🤗 Transformers. This allows users to deploy Hugging Face transformers without an inference script [[Example](https://github.com/huggingface/notebooks/blob/master/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb)].
Currently, this feature is not supported with AWS Inferentia, which means we need to provide an `inference.py` script for running inference.
_If you would be interested in support for zero-code deployments for Inferentia let us know on the [forum](https://discuss.huggingface.co/c/sagemaker/17)._
---
To use the inference script, we need to create an `inference.py` script. In our example, we are going to overwrite the `model_fn` to load our neuron model and the `predict_fn` to create a text-classification pipeline.
If you want to know more about the `inference.py` script check out this [example](https://github.com/huggingface/notebooks/blob/master/sagemaker/17_custom_inference_script/sagemaker-notebook.ipynb). It explains amongst other things what `model_fn` and `predict_fn` are.
```python
!mkdir code
```
We are using the `NEURONCORE_GROUP_SIZES=1` to make sure that each HTTP worker uses 1 Neuron core to maximize throughput.
```python
%%writefile code/inference.py
import os
from transformers import AutoConfig, AutoTokenizer
import torch
import torch.neuron
# To use one neuron core per worker
os.environ["NEURONCORE_GROUP_SIZES"] = "1"
# saved weights name
AWS_NEURON_TRACED_WEIGHTS_NAME = "neuron_model.pt"
def model_fn(model_dir):
# load tokenizer and neuron model from model_dir
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = torch.jit.load(os.path.join(model_dir, AWS_NEURON_TRACED_WEIGHTS_NAME))
model_config = AutoConfig.from_pretrained(model_dir)
return model, tokenizer, model_config
def predict_fn(data, model_tokenizer_model_config):
# destruct model, tokenizer and model config
model, tokenizer, model_config = model_tokenizer_model_config
# create embeddings for inputs
inputs = data.pop("inputs", data)
embeddings = tokenizer(
inputs,
return_tensors="pt",
max_length=model_config.traced_sequence_length,
padding="max_length",
truncation=True,
)
# convert to tuple for neuron model
neuron_inputs = tuple(embeddings.values())
# run prediction
with torch.no_grad():
predictions = model(*neuron_inputs)[0]
scores = torch.nn.Softmax(dim=1)(predictions)
# return dictionary, which will be json serializable
return [{"label": model_config.id2label[item.argmax().item()], "score": item.max().item()} for item in scores]
```
## 3. Create and upload the neuron model and inference script to Amazon S3
Before we can deploy our neuron model to Amazon SageMaker we need to create a `model.tar.gz` archive with all our model artifacts saved into `tmp/`, e.g. `neuron_model.pt` and upload this to Amazon S3.
To do this we need to set up our permissions.
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
Next, we create our `model.tar.gz`. The `inference.py` script will be placed into a `code/` folder.
```python
# copy inference.py into the code/ directory of the model directory.
!cp -r code/ tmp/code/
# create a model.tar.gz archive with all the model artifacts and the inference.py script.
%cd tmp
!tar zcvf model.tar.gz *
%cd ..
```
Now we can upload our `model.tar.gz` to our session S3 bucket with `sagemaker`.
```python
from sagemaker.s3 import S3Uploader
# create s3 uri
s3_model_path = f"s3://{sess.default_bucket()}/{model_id}"
# upload model.tar.gz
s3_model_uri = S3Uploader.upload(local_path="tmp/model.tar.gz",desired_s3_uri=s3_model_path)
print(f"model artifcats uploaded to {s3_model_uri}")
```
## 4. Deploy a Real-time Inference Endpoint on Amazon SageMaker
After we have uploaded our `model.tar.gz` to Amazon S3, we can create a custom `HuggingFaceModel`. This class will be used to create and deploy our real-time inference endpoint on Amazon SageMaker.
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py37', # python version used
)
# Let SageMaker know that we've already compiled the model via neuron-cc
huggingface_model._is_compiled_model = True
# deploy the endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type="ml.inf1.xlarge" # AWS Inferentia Instance
)
```
## 5. Run and evaluate Inference performance of BERT on Inferentia
The `.deploy()` call returns a `HuggingFacePredictor` object which can be used to request inference.
```python
data = {
"inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
res = predictor.predict(data=data)
res
```
We managed to deploy our neuron compiled BERT to AWS Inferentia on Amazon SageMaker. Now, let's test its performance. As a dummy load test, we will loop and send 10,000 synchronous requests to our endpoint.
```python
# send 10000 requests
for i in range(10000):
resp = predictor.predict(
data={"inputs": "it 's a charming and often affecting journey ."}
)
```
Let's inspect the performance in CloudWatch.
```python
print(f"https://console.aws.amazon.com/cloudwatch/home?region={sess.boto_region_name}#metricsV2:graph=~(metrics~(~(~'AWS*2fSageMaker~'ModelLatency~'EndpointName~'{predictor.endpoint_name}~'VariantName~'AllTraffic))~view~'timeSeries~stacked~false~region~'{sess.boto_region_name}~start~'-PT5M~end~'P0D~stat~'Average~period~30);query=~'*7bAWS*2fSageMaker*2cEndpointName*2cVariantName*7d*20{predictor.endpoint_name}")
```
The average latency for our BERT model is `5-6ms` for a sequence length of 128.
![bert-latency](/static/blog/huggingface-bert-aws-inferentia/cloudwatch_metrics_bert.png)
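As a rough back-of-the-envelope estimate (an assumption, not a measured benchmark): with one model per Neuron core and the chip's 4 cores serving requests independently, this latency translates into roughly:
```python
latency_ms = 6     # observed average model latency per request on a single core
neuron_cores = 4   # Neuron cores per Inferentia chip (ml.inf1.xlarge)

requests_per_second = neuron_cores * (1000 / latency_ms)
print(f"~{requests_per_second:.0f} requests/second theoretical upper bound")  # ~667
```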
### Delete model and endpoint
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## Conclusion
We successfully managed to compile a vanilla Hugging Face Transformers model to an AWS Inferentia compatible Neuron Model. After that we deployed our Neuron model to Amazon SageMaker using the new Hugging Face Inference DLC. We managed to achieve `5-6ms` latency per neuron core, which is faster than CPU in terms of latency, and achieves a higher throughput than GPUs since we ran 4 models in parallel.
If you or your company are currently using a BERT-like Transformer for encoder tasks (text-classification, token-classification, question-answering, etc.) and the latency meets your requirements, you should switch to AWS Inferentia. This will not only save costs, but can also increase efficiency and performance for your models.
We are planning to do a more detailed case study on cost-performance of transformers in the future, so stay tuned!
Also if you want to learn more about accelerating transformers you should also check out Hugging Face [optimum](https://github.com/huggingface/optimum).
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Image Classification with Hugging Face Transformers and `Keras` | https://www.philschmid.de/image-classification-huggingface-transformers-keras | 2022-01-04 | [
"HuggingFace",
"Keras",
"ViT",
"Tensorflow"
] | Learn how to fine-tune a Vision Transformer for Image Classification Example using vanilla `Keras`, `Transformers`, `Datasets`. | Welcome to this end-to-end Image Classification example using Keras and Hugging Face Transformers. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with `Tensorflow` & `Keras` to fine-tune a pre-trained vision transformer for image classification.
We are going to use the [EuroSAT](https://paperswithcode.com/dataset/eurosat) dataset for land use and land cover classification. The dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consists of 10 classes with in total 27,000 labeled and geo-referenced images.
More information for the dataset can be found at the [repository](https://github.com/phelber/eurosat).
We are going to use features from the Hugging Face ecosystem like model versioning and experiment tracking, as well as Keras features like Early Stopping and Tensorboard.
## Quick intro: Vision Transformer (ViT) by Google Brain
The Vision Transformer (ViT) is basically BERT, but applied to images. It attains excellent results compared to state-of-the-art convolutional networks. In order to provide images to the model, each image is split into a sequence of fixed-size patches (typically of resolution 16x16 or 32x32), which are linearly embedded. One also adds a [CLS] token at the beginning of the sequence in order to classify images. Next, one adds absolute position embeddings and provides this sequence to the Transformer encoder.
![vision-transformer-architecture](/static/blog/image-classification-huggingface-transformers-keras/vision-transformer-architecture.png)
- Paper: https://arxiv.org/abs/2010.11929
- Official repo (in JAX): https://github.com/google-research/vision_transformer
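As a small worked example (just the arithmetic implied by the description above, not taken from the paper), the input sequence length for a 224x224 image with 16x16 patches is:
```python
image_size = 224                               # input resolution (224x224)
patch_size = 16                                # each patch covers 16x16 pixels
num_patches = (image_size // patch_size) ** 2  # 14 * 14 = 196 patches
sequence_length = num_patches + 1              # + [CLS] token -> 197
print(num_patches, sequence_length)            # 196 197
```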
## Installation
```python
#!pip install "tensorflow==2.6.0"
!pip install transformers "datasets>=1.17.0" tensorboard --upgrade
```
```python
!sudo apt-get install git-lfs
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on [Hugging Face](https://huggingface.co/join).
If you already have an account you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## Setup & Configuration
In this step, we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. `feature extractor` and `model` we will use.
In this example we are going to fine-tune [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k), a Vision Transformer (ViT) pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224.
```python
model_id = "google/vit-base-patch16-224-in21k"
```
You can easily adjust the `model_id` to another Vision Transformer model, e.g. `google/vit-base-patch32-384`
## Dataset & Pre-processing
As dataset we will use [EuroSAT](https://paperswithcode.com/dataset/eurosat), an image classification dataset based on satellite images captured by Sentinel-2. The dataset consists of 10 classes (`Forest`, `River`, `Highway`, `AnnualCrop`, `SeaLake`, `HerbaceousVegetation`, `Industrial`, `Residential`, `PermanentCrop`, `Pasture`) with in total 27,000 labeled and geo-referenced images.
![eurosat-sample](/static/blog/image-classification-huggingface-transformers-keras/eurosat_overview_small.jpeg)
Source: [EuroSAT](https://github.com/phelber/eurosat)
The `EuroSAT` dataset is not yet available in the `datasets` library. To be able to create a `Dataset` instance we need to write a small helper function, which will load our `Dataset` from the filesystem and create the instance we will later use for training.
As a first step, we need to download the dataset to our filesystem and unzip it.
```python
!wget https://madm.dfki.de/files/sentinel/EuroSAT.zip
!unzip EuroSAT.zip -d EuroSAT
```
We should now have a directory structure that looks like this:
```bash
EuroSAT/2750/
├── AnnualCrop/
└── AnnualCrop_1.jpg
├── Forest/
└── Forest_1.jpg
├── HerbaceousVegetation/
└── HerbaceousVegetation_1.jpg
├── Highway/
└── Highway_1.jpg
├── Pasture/
└── Pasture_1.jpg
├── PermanentCrop/
└── PermanentCrop_1.jpg
├── Residential/
└── Residential_1.jpg
├── River/
└── River_1.jpg
└── SeaLake/
└── SeaLake_1.jpg
```
At the time of writing this example, `datasets` does not yet support loading image datasets from the filesystem. Therefore we create a `create_image_folder_dataset` helper function to load the dataset from the filesystem. This method creates our `_CLASS_NAMES` and our `datasets.Features`. After that, it iterates through the filesystem and creates a `Dataset` instance.
```python
import os
import datasets
def create_image_folder_dataset(root_path):
"""creates `Dataset` from image folder structure"""
# get class names by folders names
_CLASS_NAMES= os.listdir(root_path)
# defines `datasets` features`
features=datasets.Features({
"img": datasets.Image(),
"label": datasets.features.ClassLabel(names=_CLASS_NAMES),
})
# temp list holding datapoints for creation
img_data_files=[]
label_data_files=[]
# load images into list for creation
for img_class in os.listdir(root_path):
for img in os.listdir(os.path.join(root_path,img_class)):
path_=os.path.join(root_path,img_class,img)
img_data_files.append(path_)
label_data_files.append(img_class)
# create dataset
ds = datasets.Dataset.from_dict({"img":img_data_files,"label":label_data_files},features=features)
return ds
```
```python
eurosat_ds = create_image_folder_dataset("EuroSAT/2750")
```
We can display all our classes by inspecting the features of our dataset. Those `labels` can later be used to create user-friendly output when predicting.
```python
img_class_labels = eurosat_ds.features["label"].names
```
## Pre-processing
To train our model we need to convert our "Images" to `pixel_values`. This is done by a [🤗 Transformers Feature Extractor](https://huggingface.co/docs/transformers/master/en/main_classes/feature_extractor#feature-extractor) which allows us to `augment` and convert the images into a 3D Array to be fed into our model.
```python
from transformers import ViTFeatureExtractor
from tensorflow import keras
from tensorflow.keras import layers
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
# learn more about data augmentation here: https://www.tensorflow.org/tutorials/images/data_augmentation
data_augmentation = keras.Sequential(
[
layers.Resizing(feature_extractor.size, feature_extractor.size),
layers.Rescaling(1./255),
layers.RandomFlip("horizontal"),
layers.RandomRotation(factor=0.02),
layers.RandomZoom(
height_factor=0.2, width_factor=0.2
),
],
name="data_augmentation",
)
# use keras image data augmentation processing
def augmentation(examples):
# print(examples["img"])
examples["pixel_values"] = [data_augmentation(image) for image in examples["img"]]
return examples
# basic processing (only resizing)
def process(examples):
examples.update(feature_extractor(examples['img'], ))
return examples
# we are also renaming our label col to labels to use `.to_tf_dataset` later
eurosat_ds = eurosat_ds.rename_column("label", "labels")
```
We process our dataset using the `.map` method with `batched=True`.
```python
processed_dataset = eurosat_ds.map(process, batched=True)
processed_dataset
# # augmenting dataset takes a lot of time
# processed_dataset = eurosat_ds.map(augmentation, batched=True)
```
Since our dataset doesn't include any split, we need to `train_test_split` it ourselves to have an evaluation/test dataset for evaluating the results during and after training.
```python
# test size will be 15% of train dataset
test_size=.15
processed_dataset = processed_dataset.shuffle().train_test_split(test_size=test_size)
```
## Fine-tuning the model using `Keras`
Now that our `dataset` is processed, we can download the pretrained model and fine-tune it. But before we can do this we need to convert our Hugging Face `datasets` Dataset into a `tf.data.Dataset`. For this, we will use the `.to_tf_dataset` method and a `data collator` (Data collators are objects that will form a batch by using a list of dataset elements as input).
## Hyperparameter
```python
from huggingface_hub import HfFolder
import tensorflow as tf
id2label = {str(i): label for i, label in enumerate(img_class_labels)}
label2id = {v: k for k, v in id2label.items()}
num_train_epochs = 5
train_batch_size = 32
eval_batch_size = 32
learning_rate = 3e-5
weight_decay_rate=0.01
num_warmup_steps=0
output_dir=model_id.split("/")[1]
hub_token = HfFolder.get_token() # or your token directly "hf_xxx"
hub_model_id = f'{model_id.split("/")[1]}-euroSat'
fp16=True
# Train in mixed-precision float16
# Comment this line out if you're using a GPU that will not benefit from this
if fp16:
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```
## Converting the dataset to a `tf.data.Dataset`
```python
from transformers import DefaultDataCollator
# Data collator that will dynamically pad the inputs received, as well as the labels.
data_collator = DefaultDataCollator(return_tensors="tf")
# converting our train dataset to tf.data.Dataset
tf_train_dataset = processed_dataset["train"].to_tf_dataset(
columns=['pixel_values'],
label_cols=["labels"],
shuffle=True,
batch_size=train_batch_size,
collate_fn=data_collator)
# converting our test dataset to tf.data.Dataset
tf_eval_dataset = processed_dataset["test"].to_tf_dataset(
columns=['pixel_values'],
label_cols=["labels"],
shuffle=True,
batch_size=eval_batch_size,
collate_fn=data_collator)
```
## Download the pre-trained transformer model and fine-tune it.
```python
from transformers import TFViTForImageClassification, create_optimizer
import tensorflow as tf
# create optimizer with weight decay
num_train_steps = len(tf_train_dataset) * num_train_epochs
optimizer, lr_schedule = create_optimizer(
init_lr=learning_rate,
num_train_steps=num_train_steps,
weight_decay_rate=weight_decay_rate,
num_warmup_steps=num_warmup_steps,
)
# load pre-trained ViT model
model = TFViTForImageClassification.from_pretrained(
model_id,
num_labels=len(img_class_labels),
id2label=id2label,
label2id=label2id,
)
# define loss
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# define metrics
metrics=[
tf.keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
tf.keras.metrics.SparseTopKCategoricalAccuracy(3, name="top-3-accuracy"),
]
# compile model
model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics
)
```
If you want to create your own classification head or add the augmentation/processing layers to your model, you can directly use the [functional Keras API](https://keras.io/guides/functional_api/). Below you find an example of how you would create a classification head.
```python
# alternatively create Image Classification model using Keras Layer and ViTModel
# here you can also add the processing layers of keras
import tensorflow as tf
from transformers import TFViTModel
base_model = TFViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
# inputs
pixel_values = tf.keras.layers.Input(shape=(3,224,224), name='pixel_values', dtype='float32')
# model layer
vit = base_model.vit(pixel_values)[0]
classifier = tf.keras.layers.Dense(10, activation='softmax', name='outputs')(vit[:, 0, :])
# model
keras_model = tf.keras.Model(inputs=pixel_values, outputs=classifier)
```
## Callbacks
As mentioned in the beginning, we want to use the [Hugging Face Hub](https://huggingface.co/models) for model versioning and monitoring. Therefore we want to push our model weights, during and after training, to the Hub to version them.
Additionally, we want to track the performance during training, therefore we will push the `Tensorboard` logs along with the weights to the Hub to use the "Training Metrics" feature to monitor our training in real-time.
```python
import os
from transformers.keras_callbacks import PushToHubCallback
from tensorflow.keras.callbacks import TensorBoard as TensorboardCallback, EarlyStopping
callbacks=[]
callbacks.append(TensorboardCallback(log_dir=os.path.join(output_dir,"logs")))
callbacks.append(EarlyStopping(monitor="val_accuracy",patience=1))
if hub_token:
callbacks.append(PushToHubCallback(output_dir=output_dir,
hub_model_id=hub_model_id,
hub_token=hub_token))
```
![tensorboard](/static/blog/image-classification-huggingface-transformers-keras/tensorboard.png)
## Training
Start training by calling `model.fit`.
```python
train_results = model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
callbacks=callbacks,
epochs=num_train_epochs,
)
```
At the time of writing, the `feature_extractor` doesn't yet support `push_to_hub`, which is why we are pushing it manually.
```python
from huggingface_hub import HfApi
api = HfApi()
user = api.whoami(hub_token)
feature_extractor.save_pretrained(output_dir)
api.upload_file(
token=hub_token,
repo_id=f"{user['name']}/{hub_model_id}",
path_or_fileobj=os.path.join(output_dir,"preprocessor_config.json"),
path_in_repo="preprocessor_config.json",
)
```
![model-card](/static/blog/image-classification-huggingface-transformers-keras/model-card.png)
## Run Managed Training using Amazon Sagemaker
If you want to run this example on Amazon SageMaker to benefit from the Training Platform, follow the cells below. I converted the notebook into a python script [train.py](./scripts/train.py), which accepts the same hyperparameters and can be run on SageMaker using the `HuggingFace` estimator.
```python
#!pip install sagemaker
```
```python
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
```python
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'model_id': 'google/vit-base-patch16-224-in21k',
'num_train_epochs': 5,
'train_batch_size': 32,
'eval_batch_size': 32,
'learning_rate': 3e-5,
'weight_decay_rate': 0.01,
'num_warmup_steps': 0,
'hub_token': HfFolder.get_token(),
'hub_model_id': 'sagemaker-vit-base-patch16-224-in21k-eurosat',
'fp16': True
}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
transformers_version='4.12.3',
tensorflow_version='2.5.1',
py_version='py36',
hyperparameters = hyperparameters
)
```
Upload our raw dataset to S3.
```python
from sagemaker.s3 import S3Uploader
dataset_uri = S3Uploader.upload(local_path="EuroSat",desired_s3_uri=f"s3://{sess.default_bucket()}/EuroSat")
```
After the dataset is uploaded, we can start the training and pass our `s3_uri` as an argument.
```python
# starting the train job
huggingface_estimator.fit({"dataset": dataset_uri})
```
## Conclusion
We managed to successfully fine-tune a Vision Transformer using Transformers and Keras, without any heavy lifting or complex and unnecessary boilerplate code. The new utilities like `.to_tf_dataset` are improving the developer experience of the Hugging Face ecosystem to become more Keras and TensorFlow friendly. Combining those new features with the Hugging Face Hub we get a fully-managed MLOps pipeline for model-versioning and experiment management using Keras callback API.
Additionally, people can now leverage the Keras vision ecosystem together with Transformers to create their own custom models, including preprocessing layers or custom classification heads.
---
You can find the code [here](https://github.com/philschmid/keras-vision-transformer-huggingface) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Scaling Machine Learning from ZERO to HERO | https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero | 2020-05-08 | [
"AWS",
"Serverless",
"Pytorch"
] | Scale your machine learning models by using AWS Lambda, the Serverless Framework, and PyTorch. I will show you how to build scalable deep learning inference architectures. | The workflow for building machine learning models often ends at the evaluation stage: you have achieved an acceptable
accuracy, which you can test and demonstrate in your "research environment" and “ta-da! Mission accomplished.” But this
is not all! The last - and most important - step in a machine learning workflow is **deploying** your model to work in
production.
> A model which does not work in production is worth nothing.
A deployed model can be defined as any unit that is seamlessly integrated into a production environment, which can take
in an input and return an output. But one of the main issues companies face with machine learning is finding a way to
deploy these models in such environments.
> [around 40% of failed projects reportedly stalled in development and didn't get deployed into production](https://medium.com/vsinghbisen/these-are-the-reasons-why-more-than-95-ai-and-ml-projects-fail-cd97f4484ecc).
> [**Source**](https://medium.com/vsinghbisen/these-are-the-reasons-why-more-than-95-ai-and-ml-projects-fail-cd97f4484ecc)
In this post, I will show you step-by-step how to deploy your own custom-trained Pytorch model with AWS Lambda and
integrate it into your production environment with an API. We are going to leverage a simplified serverless computing
approach at scale.
## What is AWS Lambda?
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a computing service that lets you run code
without managing servers. It executes your code only when required and scales automatically, from a few requests per day
to thousands per second. You only pay for the compute time you consume - there is no charge when your code is not
running.
![AWS Lambda](/static/blog/scaling-machine-learning-from-zero-to-hero/lambda.png)
[AWS Lambda features](https://aws.amazon.com/de/lambda/features/)
---
## Requirements
This post assumes you have the [Serverless Framework](https://serverless.com/) for deploying an AWS Lambda function
installed and configured, as well as a working Docker Environment. The Serverless Framework helps us develop and deploy
AWS Lambda functions. It’s a CLI that offers structure, automation, and best practices right out of the box. It also
allows you to focus on building sophisticated, event-driven, serverless architectures, comprised of functions and
events.
![Serverless Framework](/static/blog/scaling-machine-learning-from-zero-to-hero/serverless-logo.png)
If you aren’t familiar or haven’t set up the Serverless Framework, take a look at
this [quick-start with the Serverless Framework](https://serverless.com/framework/docs/providers/aws/guide/quick-start/).
By modifying the serverless YAML file, you can connect SQS and, say, create a deep learning pipeline, or even connect it
to a chatbot via AWS Lex.
---
## Tutorial
Before we get started, I'd like to give you some information about the model we are going to use. I trained a Pytorch
image classifier in a
[google colab](https://colab.research.google.com/drive/1Ehy2Tfadj4XASpMDMuNTqHZsXHtDvTmf#scrollTo=rwm_44YP-mdk). If you
want to know what Google Colab is, take a look
[here](https://colab.research.google.com/drive/1Ehy2Tfadj4XASpMDMuNTqHZsXHtDvTmf#scrollTo=Rp7DFKE18vHI). I created a
dataset for classifying [car damage detection](https://www.kaggle.com/philschmid/car-damage-image-classifier) and
fine-tuned a resnet50 image classifier. In this tutorial, we are using `Python3.8` with `Pytorch1.5`.
![demo-images](/static/blog/scaling-machine-learning-from-zero-to-hero/auto.png)
What are we going to do:
- create a Python Lambda function with the Serverless Framework
- add Pytorch to the Lambda Environment
- write a predict function to classify images
- create a S3 bucket, which holds the model and a script to upload it
- configure the Serverless Framework to set up API Gateway for inference
The architecture we are building will look like this.
![Architecture](/static/blog/scaling-machine-learning-from-zero-to-hero/blog-serverless-pytorch.svg)
Now let’s get started with the tutorial.
---
## Create the AWS Lambda function
First, we create our AWS Lambda function by using the Serverless CLI with the `aws-python3` template.
```bash
serverless create --template aws-python3 --path scale-machine-learning-w-pytorch
```
This CLI command will create a new directory containing a `handler.py`, `.gitignore` and
`serverless.yaml` file. The `handler.py` contains some basic boilerplate code.
```python
import json
def hello(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"input":event
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
---
## Add Python Requirements
Next, we are adding our Python Requirements to our AWS Lambda function. For this, we are using the Serverless plugin
`serverless-python-requirements`. It automatically bundles dependencies from a `requirements.txt` and makes them
available. The `serverless-python-requirements` plugin allows you to even bundle non-pure-Python modules. More on that
[here](https://github.com/UnitedIncome/serverless-python-requirements#readme).
### Installing the plugin
To install the plugin run the following command.
```bash
serverless plugin install -n serverless-python-requirements
```
This will automatically add the plugin to your project's `package.json` and to the plugins section in the
`serverless.yml`.
### Adding Requirements to `requirements.txt`
We have to create a `requirements.txt` file on the root level, with all required Python packages. But you have to be
careful that the deployment package size must not exceed 250MB unzipped. You can find a list of all AWS Lambda
limitations [here](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html).
If we installed PyTorch with `pip install torch`, the package would be around ~470 MB, which is too big to be
deployed in an AWS Lambda environment. Thus, we add the link to the Python wheel file (`.whl`) directly in the
`requirements.txt`. For a list of all PyTorch and torchvision packages, consider
[this list](https://download.pytorch.org/whl/torch_stable.html).
The `requirements.txt` should look like this.
```
https://download.pytorch.org/whl/cpu/torch-1.5.0%2Bcpu-cp38-cp38-linux_x86_64.whl
torchvision==0.6.0
requests_toolbelt
```
To make the dependencies even smaller we will employ three techniques available in the `serverless-python-requirements`
plugin:
- `zip` — Compresses the dependencies in the `requirements.txt` into an additional `.requirements.zip` file and adds `unzip_requirements.py` to the final bundle.
- `slim` — Removes unneeded files and directories such as `*.so`, `*.pyc`, `dist-info`, etc.
- `noDeploy` — Omits certain packages from deployment. We will use the standard list that excludes packages already built into Lambda, as well as Tensorboard.
You can see how this is implemented below in the "Configuring the `serverless.yaml`" section.
---
## Predict function
Our Lambda function actually consists of 4 functions.
- `load_model_from_s3()` loads our model from S3 into memory, creates our PyTorch model, and builds a list called `classes`, which holds the predictable classes.
- `transform_image()` for transforming the incoming pictures into a PyTorch Tensor.
- `get_prediction()`, which uses the transformed Image as input to get a prediction.
- `detect_damage()` is the main function of our Lambda environment.
### Pseudo code
```python
model, classes = load_model_from_s3()
def detect_damage(image):
image_tensor = transform_image(image)
prediction = get_prediction(image_tensor)
return prediction
```
The working program code then looks like this.
```python
try:
import unzip_requirements
except ImportError:
pass
from requests_toolbelt.multipart import decoder
import torch
import torchvision
import torchvision.transforms as transforms
from PIL import Image
from torchvision.models import resnet50
from torch import nn
import boto3
import os
import tarfile
import io
import base64
import json
S3_BUCKET = os.environ['S3_BUCKET'] if 'S3_BUCKET' in os.environ else 'fallback-test-value'
MODEL_PATH = os.environ['MODEL_PATH'] if 'MODEL_PATH' in os.environ else 'fallback-test-value'
s3 = boto3.client('s3')
def load_model_from_s3():
try:
# get object from s3
obj = s3.get_object(Bucket=S3_BUCKET, Key=MODEL_PATH)
# read it in memory
bytestream = io.BytesIO(obj['Body'].read())
# unzip it
tar = tarfile.open(fileobj=bytestream, mode="r:gz")
for member in tar.getmembers():
if member.name.endswith(".txt"):
print("Classes file is :", member.name)
f = tar.extractfile(member)
classes = [classes.decode() for classes in f.read().splitlines()]
print(classes)
if member.name.endswith(".pth"):
print("Model file is :", member.name)
f = tar.extractfile(member)
print("Loading PyTorch model")
# set device to cpu
device = torch.device('cpu')
# create model class
model = resnet50(pretrained=False)
model.fc = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Dropout(0.2),
nn.Linear(512, 10),
nn.LogSoftmax(dim=1))
# load downloaded model
model.load_state_dict(torch.load(io.BytesIO(f.read()), map_location=device))
model.eval()
# return classes as list and model
return model, classes
except Exception as e:
raise(e)
model, classes = load_model_from_s3()
def transform_image(image_bytes):
try:
transformations = transforms.Compose([
transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
image = Image.open(io.BytesIO(image_bytes))
return transformations(image).unsqueeze(0)
except Exception as e:
print(repr(e))
raise(e)
def get_prediction(image_bytes):
tensor = transform_image(image_bytes=image_bytes)
outputs = model.forward(tensor)
_, y_hat = outputs.max(1)
predicted_idx = y_hat.item()
return classes[predicted_idx]
def detect_damage(event, context):
try:
content_type_header = event['headers']['content-type']
print(event['body'])
body = base64.b64decode(event["body"])
picture = decoder.MultipartDecoder(body, content_type_header).parts[0]
prediction = get_prediction(image_bytes=picture.content)
filename = picture.headers[b'Content-Disposition'].decode().split(';')[1].split('=')[1]
if len(filename) < 4:
filename = picture.headers[b'Content-Disposition'].decode().split(';')[2].split('=')[1]
return {
"statusCode": 200,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({'file': filename.replace('"', ''), 'predicted': prediction})
}
except Exception as e:
print(repr(e))
return {
"statusCode": 500,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({"error": repr(e)})
}
```
---
## Adding the trained model to our project
As explained earlier, I trained a car damage detection model in a
[colab notebook](https://colab.research.google.com/drive/1Ehy2Tfadj4XASpMDMuNTqHZsXHtDvTmf#scrollTo=Rp7DFKE18vHI), which
takes an image as input and returns whether the car depicted is `01-whole` or `00-damaged`. I also added some code that
does all the bundling magic for you: If you run the notebook it will create a file called `cardamage.tar.gz` that is
ready to be deployed on AWS. Keep in mind that the unzipped Lambda deployment package can only be 250MB in size. Thus, we cannot
include our model directly in the function. Instead, we need to download it from S3 with `load_model_from_s3()`.
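For reference, the "bundling magic" essentially packs the trained weights and the class names into a single gzipped tar archive, which is exactly what `load_model_from_s3()` unpacks later. A minimal sketch (the file names are assumptions; the notebook handles this for you):

```python
import tarfile

# pack the artifacts the Lambda expects: a *.pth state_dict and a *.txt class list
with tarfile.open("cardamage.tar.gz", "w:gz") as tar:
    tar.add("model.pth")    # saved via torch.save(model.state_dict(), "model.pth")
    tar.add("classes.txt")  # one class per line, e.g. "00-damaged" and "01-whole"
```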
To download the model from S3, we need an S3 bucket. You can either create one using the management console or with this script.
```bash
aws s3api create-bucket --bucket bucket-name --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
```
After we have created the bucket, we can upload our model. You can do it either manually or using the provided Python script.
```python
import boto3
def upload_model(model_path='', s3_bucket='', key_prefix='', aws_profile='default'):
s3 = boto3.session.Session(profile_name=aws_profile)
client = s3.client('s3')
client.upload_file(model_path, s3_bucket, key_prefix)
```
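A call could then look like this; the bucket name and object key are placeholders, and the key you choose here is what you will later set as `MODEL_PATH`:

```python
upload_model(
    model_path="cardamage.tar.gz",
    s3_bucket="bucket-name",               # the bucket created above
    key_prefix="models/cardamage.tar.gz",  # used as the S3 object key / MODEL_PATH
)
```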
---
## Configuring the `serverless.yaml`
The next step is to adjust the `serverless.yaml` and include the `custom` Python requirement configuration. We are
going to edit four sections of the `serverless.yaml`, ...
- the `provider` section which holds our runtime and IAM permissions.
- the `custom` section where we configure the `serverless-python-requirements` plugin.
- the `package` section where we exclude folders from production.
- the `function` section where we create the function and define events that invoke our Lambda function.
Have a look at the complete `serverless.yaml`. Don't worry, I will explain all four sections in detail in a minute.
```yaml
service: car-damage-pytorch
provider:
name: aws
runtime: python3.8
region: eu-central-1
timeout: 60
environment:
S3_BUCKET: S3_BUCKET_WHICH_HOLDS_YOUR_MODEL
MODEL_PATH: PATH_TO_FILE_ON_S3
iamRoleStatements:
- Effect: 'Allow'
Action:
- s3:getObject
Resource: arn:aws:s3:::S3_BUCKET_WHICH_HOLDS_YOUR_MODEL/PATH_TO_FILE_ON_S3
custom:
pythonRequirements:
dockerizePip: true
zip: true
slim: true
strip: false
noDeploy:
- docutils
- jmespath
- pip
- python-dateutil
- setuptools
- six
- tensorboard
useStaticCache: true
useDownloadCache: true
cacheLocation: './cache'
package:
individually: false
exclude:
- package.json
- package-log.json
- node_modules/**
- cache/**
- test/**
- __pycache__/**
- .pytest_cache/**
- model/**
functions:
detect_damage:
handler: handler.detect_damage
memorySize: 3008
timeout: 60
events:
- http:
path: detect
method: post
cors: true
plugins:
- serverless-python-requirements
```
### `provider`
In the Serverless Framework, we define where our function is deployed in the `provider` section. We are using `aws` as
our provider; other options include `google`, `azure`, and many more. You can find a full list of providers
[here](https://www.serverless.com/framework/docs/providers/).
In addition, we define our runtime, our environment variables, and the permissions our Lambda function has.
As runtime, we are using `python3.8`. For our function to work we need two environment variables `S3_BUCKET` and
`MODEL_PATH`. `S3_BUCKET` contains the name of our S3 Bucket, which we created earlier. `MODEL_PATH` is the path to our
`cardamage.tar.gz` file in the S3 bucket. We are still missing the permissions to get our model from S3 into our Lambda
function. The `iamRoleStatements` section handles the permissions for our Lambda function. The permission we need to get our
model from S3 is `s3:getObject` with the ARN
([Amazon Resource Names](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)) of our S3 bucket
as a resource.
```yaml
provider:
name: aws
runtime: python3.8
region: eu-central-1
timeout: 60
environment:
S3_BUCKET: S3_BUCKET_WHICH_HOLDS_YOUR_MODEL
MODEL_PATH: PATH_TO_FILE_ON_S3
iamRoleStatements:
- Effect: 'Allow'
Action:
- s3:getObject
Resource: arn:aws:s3:::S3_BUCKET_WHICH_HOLDS_YOUR_MODEL/PATH_TO_FILE_ON_S3
```
### `custom`
In the `custom` section of the `serverless.yml`, we can define configurations for plugins or other scripts. For more
details, refer to this [guide](https://www.serverless.com/framework/docs/providers/aws/guide/variables/). As described
earlier, we are using the `serverless-python-requirements` plugin to install our dependencies and reduce their size at the same
time, so we can pack everything into the Lambda runtime. If you want to know how it works you can read
[here](https://www.npmjs.com/package/serverless-python-requirements).
```yaml
custom:
pythonRequirements:
dockerizePip: true
zip: true
slim: true
strip: false
noDeploy:
- docutils
- jmespath
- pip
- python-dateutil
- setuptools
- six
- tensorboard
useStaticCache: true
useDownloadCache: true
cacheLocation: './cache'
```
### `package`
The `package` section can be used to exclude directories/folders from the final package. This offers more control in the
packaging process. You can `exclude` specific folders and files, like `node_modules/`. For more detail take a look
[here](https://www.serverless.com/framework/docs/providers/aws/guide/packaging/).
```yaml
package:
individually: false
exclude:
- package.json
- package-log.json
- node_modules/**
- cache/**
- test/**
- __pycache__/**
- .pytest_cache/**
- model/**
```
### `function`
The fourth and last section - `function` - holds the configuration for our Lambda function. We define the allocated
memory size, a timeout, and the `events` here. In the `events` section of the `function`, we can define a number of
events, which will trigger our lambda function. For our project, we are using `http` which will automatically create an
API Gateway pointing to our function. You can also define events for `sqs`, `cron`, `s3` upload event and many more. You
can find the full list [here](https://www.serverless.com/framework/docs/providers/aws/guide/events/).
```yaml
functions:
detect_damage:
handler: handler.detect_damage
memorySize: 3008
timeout: 60
events:
- http:
path: detect
method: post
cors: true
```
## Deploying the function
In order to deploy the function, we create a `deploy` script in the `package.json`. To deploy our function we need to
have docker up and running.
```json
{
"name": "blog-github-actions-aws-lambda-python",
"description": "",
"version": "0.1.0",
"dependencies": {},
"scripts": {
"deploy": "serverless deploy"
},
"devDependencies": {
"serverless": "^1.67.0",
"serverless-python-requirements": "^5.1.0"
}
}
```
Afterwards, we can run `yarn deploy` or `npm run deploy` to deploy our function. This could take a while as we are
creating a Python environment with docker and installing all our dependencies in it and then uploading everything to
AWS.
After this process is done we should see something like this.
![deployed function](/static/blog/scaling-machine-learning-from-zero-to-hero/deployed.png.png)
## Test and Outcome
To test our Lambda function we can use Insomnia, Postman, or any other REST client. Just add an image of a damaged or
whole car as a multipart input in the request. Let's try it with this image.
![red car](/static/blog/scaling-machine-learning-from-zero-to-hero/0228.jpeg)
![result request](/static/blog/scaling-machine-learning-from-zero-to-hero/insomnia.png)
As a result of our test with the red car, we get `01-whole`, which is correct. Also, you can see that the complete request took
319ms with a Lambda execution time of around 250ms. To be honest, this is pretty fast.
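If you prefer to script the test instead of using a REST client, a minimal `requests` call could look like this; the endpoint URL and file name are placeholders for your own deployment:

```python
import requests

# placeholder URL - use the endpoint printed by `serverless deploy`
url = "https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/detect"

with open("car.jpeg", "rb") as f:
    files = {"file": ("car.jpeg", f, "image/jpeg")}
    response = requests.post(url, files=files)

print(response.json())  # should contain the predicted class, e.g. "01-whole" or "00-damaged"
```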
If you are going to rebuild the classifier, be aware that the first request can take a while: the Lambda first unzips and
installs our dependencies and then downloads the model from S3. After this is done once, the Lambda needs around
250ms - 1000ms per classification, depending on the input image size.
**The best thing is, our classifier automatically scales up if there are several incoming requests!**
You can scale up to thousands of parallel requests without any worries.
---
Thanks for reading. You can find the GitHub repository with the complete code
[here](https://github.com/philschmid/scale-machine-learning-w-pytorch) and the colab notebook
[here](https://colab.research.google.com/drive/1Ehy2Tfadj4XASpMDMuNTqHZsXHtDvTmf#scrollTo=vV9cHcwN0MXw). If you have any
questions, feel free to contact me. |
K-Fold as Cross-Validation with a BERT Text-Classification Example | https://www.philschmid.de/k-fold-as-cross-validation-with-a-bert-text-classification-example | 2020-04-07 | [
"Python",
"BERT"
] | Using the K-Fold Cross-Validation to improve your Transformers model validation by the example of BERT Text-Classification | K-fold is a cross-validation method used to estimate the skill of a machine learning model on unseen data. It is
commonly used to validate a model because it is easy to understand and implement, and its results are more
informative than those of a simple train/validation split.
Cross-validation is a resampling procedure used to validate machine learning models on a limited data set. The procedure
has a single parameter called K that refers to the number of groups a given data sample is split into, which is
why it is called K-fold.
The choice of K is usually 5 or 10, but there is no formal rule. As K gets larger, the resampling subsets get
smaller. K also defines how often your machine learning model is trained. Most of the time we
split our data into train/validation sets at 80%-20%, 90%-10% or 70%-30% and train our model once. In cross-validation,
we split our data K times and train K models. Be aware that this results in longer training processes.
## K-Fold steps:
1. Shuffle the dataset.
2. Split the dataset into `K` groups.
3. For each unique group `g`:
1. Take `g` as a test dataset.
2. Take the remaining groups as a training data set.
3. Fit a model on the training set and evaluate it on the test set.
4. Retain the evaluation score and discard the model.
4. Summarize the skill of the model using the sample of model evaluation scores.
![Illustration of K-Fold](/static/blog/k-fold-as-cross-validation-with-a-bert-text-classification-example/k-fold.svg)
**The results of a K-fold cross-validation run are often summarized with the mean of the model scores.**
---
## Scikit-Learn Example
The example is a simple implementation with scikit-learn and a one-dimensional numpy array.
```python
import numpy as np
from sklearn.model_selection import KFold
# data sample
data = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
# prepare cross validation
kfold = KFold(n_splits=3, shuffle=True, random_state=1)
# enumerate splits
for train, test in kfold.split(data):
print('train: %s, test: %s' % (data[train], data[test]))
#>>> Result
#train: [0.1 0.4 0.5 0.6], test: [0.2 0.3]
#train: [0.2 0.3 0.4 0.6], test: [0.1 0.5]
#train: [0.1 0.2 0.3 0.5], test: [0.4 0.6]
```
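If you only need the fold scores, scikit-learn's `cross_val_score` wraps the same loop and returns one score per fold, which you can then average (a minimal sketch on a toy dataset):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# 5-fold cross-validation: trains and evaluates the model 5 times
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)         # one accuracy score per fold
print(scores.mean())  # the usual single-number summary
```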
## Simpletransformers Example (BERT Text-Classification)
The example is an implementation of a `BERT Text-Classification` with the
[`simpletransformers` library](https://github.com/ThilinaRajapakse/simpletransformers) and `Scikit-Learn`.
```python
from simpletransformers.classification import ClassificationModel
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
import pandas as pd
# Dataset
dataset = [["Example sentence belonging to class 1", 1],
["Example sentence belonging to class 0", 0],
["Example eval sentence belonging to class 1", 1],
["Example eval sentence belonging to class 0", 0]]
train_data = pd.DataFrame(dataset)
# prepare cross validation
n=5
seed = 42  # fix the random state for reproducible splits
kf = KFold(n_splits=n, random_state=seed, shuffle=True)
results = []
for train_index, val_index in kf.split(train_data):
# splitting Dataframe (dataset not included)
train_df = train_data.iloc[train_index]
val_df = train_data.iloc[val_index]
# Defining Model
model = ClassificationModel('bert', 'bert-base-uncased')
# train the model
model.train_model(train_df)
# validate the model
result, model_outputs, wrong_predictions = model.eval_model(val_df, acc=accuracy_score)
print(result['acc'])
# append model score
results.append(result['acc'])
print("results",results)
print(f"Mean-Precision: {sum(results) / len(results)}")
#>>> Result
# 0.8535784635587655
# 0.8509520682862771
# 0.855548260013132
# 0.8272010512483574
# 0.8212877792378449
#results [0.8535784635587655,0.8509520682862771,0.855548260013132,
# 0.8272010512483574,0.8212877792378449]
# Mean-Precision: 0.8407520682862771
```
---
## Benefits of K-Fold Cross-Validation
**Using all data**: With K-fold cross-validation we use the complete dataset, which is helpful if we have a
small dataset, because we train and validate the model K times instead of setting aside a fixed X% as a
validation dataset.
**Getting more metrics**: Most of the time you get a single set of metric results, but with K-fold you'll get K
results per metric and can take a deeper look into your model's performance.
**Achieving higher precision**: By validating your model against multiple “validation-sets” we get a higher level of
reliability. Let’s imagine the following example: We have 3 speakers and 1500 recordings (500 for each speaker). If we
do a simple train/validation split the result could be very different depending on the split. |
Static Quantization with Hugging Face `optimum` for ~3x latency improvements | https://www.philschmid.de/static-quantization-optimum | 2022-06-07 | [
"BERT",
"OnnxRuntime",
"HuggingFace",
"Quantization"
] | Learn how to do post-training static quantization on Hugging Face Transformers model with `optimum` to achieve up to 3x latency improvements. | notebook: [optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization/blob/master/notebook.ipynb)
In this session, you will learn how to do post-training static quantization on a Hugging Face Transformers model. The session will show you how to quantize a DistilBERT model using [Hugging Face Optimum](https://huggingface.co/docs/optimum/index) and [ONNX Runtime](https://onnxruntime.ai/). Hugging Face Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
Note: Static quantization is currently only supported for CPUs, so we will not be utilizing GPUs / CUDA in this session. By the end of this session, you will see how quantization with Hugging Face Optimum can result in a significant decrease in model latency while keeping almost 100% of the full-precision model's accuracy. Furthermore, you'll see how to easily apply some advanced quantization and optimization techniques shown here so that your models take much less of an accuracy hit than they would otherwise.
You will learn how to:
- [1. Setup Development Environment](#1-setup-development-environment)
- [2. Convert a Hugging Face `Transformers` model to ONNX for inference](#2-convert-a-hugging-face-transformers-model-to-onnx-for-inference)
- [3. Configure static quantization & run Calibration of quantization ranges](#3-configure-static-quantization--run-calibration-of-quantization-ranges)
- [4. Use the ORTQuantizer to apply static quantization](#4-use-the-ortquantizer-to-apply-static-quantization)
- [5. Test inference with the quantized model](#5-test-inference-with-the-quantized-model)
- [6. Evaluate the performance and speed](#6-evaluate-the-performance-and-speed)
- [7. Push the quantized model to the Hub](#7-push-the-quantized-model-to-the-hub)
- [8. Load and run inference with a quantized model from the hub](#8-load-and-run-inference-with-a-quantized-model-from-the-hub)
Or you can immediately jump to the [Conclusion](#conclusion).
Let's get started! 🚀
_This tutorial was created and run on a c6i.xlarge AWS EC2 Instance._
---
## 1. Setup Development Environment
Our first step is to install Optimum with the onnxruntime utilities and the `evaluate` library.
This will install all required packages including transformers, torch, and onnxruntime. If you are going to use a GPU, you can install optimum with `pip install optimum[onnxruntime-gpu]`.
```python
!pip install "optimum[onnxruntime]==1.2.2" evaluate[evaluator] sklearn mkl-include mkl
```
## 2. Convert a Hugging Face `Transformers` model to ONNX for inference
Before we can start quantizing, we need to convert our vanilla `transformers` model to the `onnx` format. To do this, we will use the new [ORTModelForSequenceClassification](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForSequenceClassification) class, calling the `from_pretrained()` method with the `from_transformers` attribute. The model we are using is [optimum/distilbert-base-uncased-finetuned-banking77](https://huggingface.co/optimum/distilbert-base-uncased-finetuned-banking77), a DistilBERT model fine-tuned on the Banking77 dataset, achieving an accuracy score of `92.5` with `text-classification` as the feature (task).
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer
from pathlib import Path
model_id="optimum/distilbert-base-uncased-finetuned-banking77"
dataset_id="banking77"
onnx_path = Path("onnx")
# load vanilla transformers and convert to onnx
model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)
```
## 3. Configure static quantization & run Calibration of quantization ranges
Post-training static quantization, compared to dynamic quantization, not only involves converting the weights from float to int, but also requires an additional step of feeding data through the model to compute the distributions of the different activations (calibration ranges). These distributions are then used to determine how the different activations should be quantized at inference time.
Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.
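To make the role of those calibration ranges concrete, here is a toy NumPy sketch of affine 8-bit quantization with a fixed range; it is not ONNX Runtime's exact scheme, just the idea of mapping floats to integers using a pre-computed `rmin`/`rmax`:

```python
import numpy as np

def quantize_with_range(x, rmin, rmax):
    # map floats in [rmin, rmax] to uint8 using a scale and zero-point
    scale = (rmax - rmin) / 255.0
    zero_point = np.round(-rmin / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

activations = np.random.randn(4, 8).astype(np.float32)
q, scale, zp = quantize_with_range(activations, rmin=-3.0, rmax=3.0)
# values outside the calibration range get clipped; everything else loses at most ~scale/2
print(np.abs(activations - dequantize(q, scale, zp)).max())
```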
The first step is to create our quantization configuration using `optimum`.
```python
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from onnxruntime.quantization import QuantFormat, QuantizationMode
# create ORTQuantizer and define quantization configuration
quantizer = ORTQuantizer.from_pretrained(model_id, feature=model.pipeline_task)
qconfig = AutoQuantizationConfig.avx512_vnni(
is_static=True,
format=QuantFormat.QOperator,
mode=QuantizationMode.QLinearOps,
per_channel=True,
operators_to_quantize=["MatMul", "Add" ]
)
```
After we have defined our quantization configuration, we are going to use the fine-tuning dataset as calibration data to calculate the quantization parameters of the activations. The `ORTQuantizer` supports three calibration methods: MinMax, Entropy and Percentile.
We are going to use Percentile as the calibration method. For this session we have already run hyperparameter optimization in advance to find the right `percentiles` to achieve the highest accuracy. For this we used the `scripts/run_static_quantizatio_hpo.py` script together with `optuna`.
Finding the right calibration method and percentiles is what makes static quantization cost-intensive, since it can take multiple hours to find the right values and there is sadly no rule of thumb.
If you want to learn more about it you should check out the "[INTEGER QUANTIZATION FOR DEEP LEARNING INFERENCE:
PRINCIPLES AND EMPIRICAL EVALUATION](https://arxiv.org/pdf/2004.09602.pdf)" paper
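If you want to run such a search yourself, a hedged sketch of an `optuna` study could look like the following; `quantize_and_evaluate` is a placeholder for the calibration, quantization and accuracy-evaluation steps shown in the rest of this post:

```python
import optuna

def quantize_and_evaluate(percentile: float) -> float:
    # placeholder: run calibration with AutoCalibrationConfig.percentiles(..., percentile=percentile),
    # quantize the model and return the accuracy on a validation split
    raise NotImplementedError

def objective(trial: optuna.Trial) -> float:
    percentile = trial.suggest_float("percentile", 99.99, 99.9999)
    return quantize_and_evaluate(percentile)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

With a percentile fixed, the actual calibration run then looks like this: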
```python
import os
from functools import partial
from optimum.onnxruntime.configuration import AutoCalibrationConfig
def preprocess_fn(ex, tokenizer):
return tokenizer(ex["text"],padding="longest")
# Create the calibration dataset
calibration_samples = 256
calibration_dataset = quantizer.get_calibration_dataset(
dataset_id,
preprocess_function=partial(preprocess_fn, tokenizer=quantizer.tokenizer),
num_samples=calibration_samples,
dataset_split="train",
)
# Create the calibration configuration containing the parameters related to calibration.
calibration_config = AutoCalibrationConfig.percentiles(calibration_dataset, percentile=99.99239080907178)
# Perform the calibration step: computes the activations quantization ranges
shards=16
for i in range(shards):
shard = calibration_dataset.shard(shards, i)
quantizer.partial_fit(
dataset=shard,
calibration_config=calibration_config,
onnx_model_path=onnx_path / "model.onnx",
operators_to_quantize=qconfig.operators_to_quantize,
batch_size=calibration_samples//shards,
use_external_data_format=False,
)
ranges = quantizer.compute_ranges()
# remove temp augmented model again
os.remove("augmented_model.onnx")
```
## 4. Use the ORTQuantizer to apply static quantization
After we have calculated our calibration tensor ranges we can quantize our model using the `ORTQuantizer`.
```python
from utils import create_quantization_preprocessor
# create processor
quantization_preprocessor = create_quantization_preprocessor()
# Quantize the same way we did for dynamic quantization!
quantizer.export(
onnx_model_path=onnx_path / "model.onnx",
onnx_quantized_model_output_path=onnx_path / "model-quantized.onnx",
calibration_tensors_range=ranges,
quantization_config=qconfig,
preprocessor=quantization_preprocessor,
)
```
Let's quickly check the new model size.
```python
import os
# get model file size
size = os.path.getsize(onnx_path / "model.onnx")/(1024*1024)
quantized_model = os.path.getsize(onnx_path / "model-quantized.onnx")/(1024*1024)
print(f"Model file size: {size:.2f} MB")
print(f"Quantized Model file size: {quantized_model:.2f} MB")
# Model file size: 255.68 MB
# Quantized Model file size: 134.32 MB
```
## 5. Test inference with the quantized model
[Optimum](https://huggingface.co/docs/optimum/main/en/pipelines#optimizing-with-ortoptimizer) has built-in support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). This allows us to leverage the same API that we know from using PyTorch and TensorFlow models.
Therefore, we can load our quantized model with the `ORTModelForSequenceClassification` class and the transformers `pipeline`.
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer
model = ORTModelForSequenceClassification.from_pretrained(onnx_path,file_name="model-quantized.onnx")
tokenizer = AutoTokenizer.from_pretrained(onnx_path)
clx = pipeline("text-classification",model=model, tokenizer=tokenizer)
clx("What is the exchange rate like on this app?")
```
## 6. Evaluate the performance and speed
We can now use the `evaluate` library's `evaluator` to iterate over the test split of the Banking77 dataset and run a prediction for each data point with our quantized pipeline, which lets us compute its accuracy.
```python
from evaluate import evaluator
from datasets import load_dataset
eval = evaluator("text-classification")
eval_dataset = load_dataset("banking77", split="test")
results = eval.compute(
model_or_pipeline=clx,
data=eval_dataset,
metric="accuracy",
input_column="text",
label_column="label",
label_mapping=model.config.label2id,
strategy="simple",
)
print(results)
# {'accuracy': 0.9224025974025974}
print(f"Vanilla model: 92.5%")
print(f"Quantized model: {results['accuracy']*100:.2f}%")
print(f"The quantized model achieves {round(results['accuracy']/0.925,4)*100:.2f}% accuracy of the fp32 model")
# Vanilla model: 92.5%
# Quantized model: 92.24%
# The quantized model achieves 99.72% accuracy of the fp32 model
```
Okay, now let's test the performance (latency) of our quantized model. We are going to use a payload with a sequence length of 128 for the benchmark. To keep it simple, we are going to use a Python loop and calculate the average and p95 latency for our vanilla model and for the quantized model.
```python
from time import perf_counter
import numpy as np
payload="Hello my name is Philipp. I am getting in touch with you because i didn't get a response from you. What do I need to do to get my new card which I have requested 2 weeks ago? Please help me and answer this email in the next 7 days. Best regards and have a nice weekend "*2
print(f'Payload sequence length: {len(tokenizer(payload)["input_ids"])}')
def measure_latency(pipe):
latencies = []
# warm up
for _ in range(10):
_ = pipe(payload)
# Timed run
for _ in range(300):
start_time = perf_counter()
_ = pipe(payload)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
time_p95_ms = 1000 * np.percentile(latencies,95)
return f"P95 latency (ms) - {time_p95_ms}; Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f};", time_p95_ms
vanilla_clx = pipeline("text-classification",model=model_id)
vanilla_model=measure_latency(vanilla_clx)
quantized_model=measure_latency(clx)
print(f"Vanilla model: {vanilla_model[0]}")
print(f"Quantized model: {quantized_model[0]}")
print(f"Improvement through quantization: {round(vanilla_model[1]/quantized_model[1],2)}x")
```
We managed to accelerate our model latency from 75.69ms to 26.75ms or 2.83x while keeping 99.72% of the accuracy.
![performance](/static/blog/static-quantization-optimum/performance.png)
## 7. Push the quantized model to the Hub
The Optimum model classes like `ORTModelForSequenceClassification` are integrated with the Hugging Face Model Hub, which means you can not only load models from the Hub, but also push your models to the Hub with the `push_to_hub()` method. That way we can now save our quantized model on the Hub to be used, for example, inside our inference API.
_We have to make sure that we are also saving the `tokenizer` as well as the `config.json` to have a good inference experience._
If you haven't logged into the `huggingface hub` yet you can use the `notebook_login` to do so.
```python
from huggingface_hub import notebook_login
notebook_login()
```
After we have configured our Hugging Face Hub credentials, we can push the model.
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
tmp_store_directory="onnx_hub_repo"
repository_id="quantized-distilbert-banking77"
model_file_name="model-quantized.onnx"
model.latest_model_name=model_file_name # workaround for PR #214
model.save_pretrained(tmp_store_directory)
quantizer.tokenizer.save_pretrained(tmp_store_directory)
model.push_to_hub(tmp_store_directory,
repository_id=repository_id,
use_auth_token=True
)
```
## 8. Load and run inference with a quantized model from the hub
This step serves as a demonstration of how you could use optimum in your API to load and use our quantized model.
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer
model = ORTModelForSequenceClassification.from_pretrained("philschmid/quantized-distilbert-banking77")
tokenizer = AutoTokenizer.from_pretrained("philschmid/quantized-distilbert-banking77")
remote_clx = pipeline("text-classification",model=model, tokenizer=tokenizer)
remote_clx("What is the exchange rate like on this app?")
```
## Conclusion
We successfully quantized our vanilla Transformers model with Hugging Face and managed to accelerate our model latency from 75.69ms to 26.75ms or 2.83x while keeping 99.72% of the accuracy.
But I have to say that this isn't a plug-and-play process you can transfer to any Transformers model, task and dataset. The challenge with static quantization is calibrating the dataset to find the right ranges you can use to quantize the model and achieve good performance. I ran a hyperparameter search to find the best ranges for our dataset and quantized the model using the [run_static_quantizatio_hpo.py](https://github.com/philschmid/optimum-static-quantization/blob/master/scripts/run_static_quantizatio_hpo.py).
Also note that static quantization can, at best, only match the accuracy of dynamic quantization, but it will be faster at inference. This means it is often a good idea to first dynamically quantize your model using Optimum and then move to static quantization for further latency and throughput gains. The attached repository also includes an example of how to dynamically quantize the model: [dynamic_quantization.py](https://github.com/philschmid/optimum-static-quantization/blob/master/scripts/dynamic_quantization.py)
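For reference, dynamic quantization with the same `optimum` version needs no calibration data at all; a rough sketch could look like the following (the feature name, output path and config values are assumptions, see the linked `dynamic_quantization.py` for the exact script):

```python
from pathlib import Path
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "optimum/distilbert-base-uncased-finetuned-banking77"
onnx_path = Path("onnx")

# dynamic quantization: activation ranges are computed at runtime, so no calibration step
quantizer = ORTQuantizer.from_pretrained(model_id, feature="sequence-classification")
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=True)
quantizer.export(
    onnx_model_path=onnx_path / "model.onnx",
    onnx_quantized_model_output_path=onnx_path / "model-quantized-dynamic.onnx",
    quantization_config=dqconfig,
)
```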
The code can be found in this repository [philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization)
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
How to Set Up a CI/CD Pipeline for AWS Lambda With GitHub Actions and Serverless | https://www.philschmid.de/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless | 2020-04-01 | [
"AWS",
"Python",
"Github"
] | Automatically deploy your Python function with dependencies in less than five minutes | A functional CI/CD pipeline for your project is incredibly valuable as a developer. Thankfully, it’s not difficult to
set up such a pipeline with Github Actions.
In my previous
article, [Set up a CI/CD Pipeline for your Web app on AWS with Github Actions](https://www.philschmid.de/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-with-github-actions),
I demonstrated how to set up a CI/CD pipeline for your front end application. This time, I’ll focus on the back end.
I’m going to give you a quick and easy, step-by-step tutorial on setting up a CI/CD Pipeline for AWS Lambda with Github
Actions. For my AWS Lambda, I chose Python for the runtime. I’ll also cover how to include Python packages such
as `scikit-learn` or `pandas`.
---
## TL;DR
If you don't want to read the complete post, just copy the action
and `Serverless` configuration [from this Github repository](https://github.com/philschmid/blog-github-actions-aws-lambda-python) and
add the Github secrets to your repository. If you fail, come back and read the article!
---
## Requirements
This post assumes you have the [Serverless Framework](https://serverless.com/) for deploying an AWS Lambda function
installed and configured, as well as a working Github account and Docker installed. The Serverless Framework helps us
develop and deploy AWS Lambda functions. It’s a CLI that offers structure, automation, and best practices right out of
the box. It also allows you to focus on building sophisticated, event-driven, serverless architectures, comprised of
functions and events.
![Serverless Framework Logo](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/serverless-logo.png)
If you aren’t familiar or haven’t set up the Serverless Framework, take a look at
this [quick-start with the Serverless Framework](https://serverless.com/framework/docs/providers/aws/guide/quick-start/).
Now let’s get started with the tutorial.
---
## Create AWS Lambda function
The first thing we are doing is creating our AWS Lambda function by using the Serverless CLI with the `aws-python3`
template.
```bash
serverless create --template aws-python3 --path <your-path>
```
This CLI command will create a new directory with a `handler.py`, `.gitignore` and `serverless.yaml` file in it. The
`handler.py` contains some basic boilerplate code.
```python
import json
def hello(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"input":event
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
The `serverless.yaml` contains the configuration for deploying the function. If you are interested in what can be
configured with the `serverless.yaml`, take a look
[here](https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/).
---
## Add Python Requirements
Next, we are adding our Python Requirements to our AWS Lambda function. For this we are using the Serverless plugin
`serverless-python-requirements`. It automatically bundles requirements from a `requirements.txt` and makes them
available in our `PYTHONPATH`. The `serverless-python-requirements` plugin allows you to even bundle non-pure-Python
modules.
[If you are interested, take a look here.](https://github.com/UnitedIncome/serverless-python-requirements#readme)
### Installing the plugin
To install the plugin run the following command.
```bash
serverless plugin install -n serverless-python-requirements
```
This will automatically add the plugin to your project's `package.json` and to the plugins section in the
`serverless.yml`. The next step is adjusting the `serverless.yaml` and including the `custom` Python requirement
configuration. We need this extra configuration because our Github Actions runtime is Node and with the configuration,
we can bundle our python requirements in a docker container.
I also...
- deleted all comments
- added an HTTP event
- added the `package` section to exclude `node_modules` from deployment
- changed the region to `eu-central-1`
```yaml
service: <name-of-your-function>
provider:
name: aws
runtime: python3.7
region: eu-central-1
custom:
pythonRequirements:
dockerizePip: true
package:
individually: false
exclude:
- package.json
- package-log.json
- node_modules/**
functions:
get_joke:
handler: handler.get_joke
events:
- http:
path: joke
method: get
plugins:
- serverless-python-requirements
```
### Creating deploy script
In addition to our configuration in the `serverless.yaml` we need to edit the `package.json` and include `serverless` as
`devDependencies`. Additionally, we add a `deploy` script to deploy the function later. We are going to use this deploy
script in the Github Action later.
```json
{
"name": "blog-github-actions-aws-lambda-python",
"description": "",
"version": "0.1.0",
"dependencies": {},
"scripts": {
"deploy": "serverless deploy"
},
"devDependencies": {
"serverless": "^1.67.0",
"serverless-python-requirements": "^5.1.0"
}
}
```
### Adding Requirements to `requirements.txt`
We have to create a `requirements.txt` file on the root level, with all required Python packages. But you have to be
careful that the deployment package size cannot go over 250MB unzipped. You can find a list of all AWS Lambda
limitations [here](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html).
Another tip: the `boto3` package is already pre-installed, so you don't have to include it in the `requirements.txt`.
For demonstration purposes, I chose the `pyjokes` package and let the function respond with a joke to all requests. I
include `pyjokes` in the `requirements.txt`.
```
pyjokes
```
Afterward, I add `pyjokes` to the function in `handler.py` and return a random joke.
```python
import json
import pyjokes
def get_joke(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"joke":pyjokes.get_joke()
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
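Before deploying, you can sanity-check the handler locally; this assumes you run it from the project root and have `pyjokes` installed in your local Python environment:

```python
# quick local test of the Lambda handler
from handler import get_joke

print(get_joke({}, None))
# -> {'statusCode': 200, 'body': '{"message": "...", "joke": "..."}'}
```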
---
## Deploy Function manually
Before using Github Actions, we deploy the function by hand with the following command.
**Attention: Docker must be up and running.**
```bash
npm run-script deploy
```
In your CLI you should see an output like this.
![CLI output after deployment](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/cli.png)
We can test our function by clicking the url provided in the `endpoints` section.
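Alternatively, you can call the endpoint from Python; the URL below is a placeholder for the one printed in your `endpoints` section:

```python
import requests

# placeholder - replace with the endpoint printed by `serverless deploy`
url = "https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/joke"

print(requests.get(url).json())
```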
![successful request to lambda](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/request1.png)
---
## Create Github Actions
### Create folders & files
The first thing we have to do for our Action is to create the folder `.github` with a folder `workflows` in it on your
project root level. Afterwards create the `deploy-aws-lambda.yaml` file in it.
### Creating the Github Action
Copy this code snippet into the `deploy-aws-lambda.yaml` file.
```yaml
name: deploy-aws-lambda
on:
push:
branches:
- master
jobs:
deploy:
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [10.x]
steps:
- uses: actions/checkout@master
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install Dependencies
run: npm install
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: eu-central-1
- name: Serverless Deploy
run: npm run-script deploy
```
This code snippet describes the Action. The Github Action will be triggered after a `push` on the `master` branch. You
can change this by adjusting the `on` section in the snippet. If you want a different trigger for your action look
[here](https://help.github.com/en/actions/reference/events-that-trigger-workflows).
## Add secrets to your repository
The third and last step is adding secrets to your Github repository. For this Github Action, we need the access key ID
and secret access key of an IAM user, stored as secrets called `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
If you are not sure how to create an IAM user for the access key ID and secret access key you can read
[here](https://serverless-stack.com/chapters/create-an-iam-user.html).
### Adding the secrets
To add the secrets you have to go to the “settings” tab of your repository.
![Github Repository Navigation](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/navigation.png)
Then go to secrets in the left navigation panel.
![Github Repository Settings](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/settings.png)
On the secrets page, you can add your 2 secrets `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
![Github Repository Secrets](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/secrets.png)
## Grab a coffee and enjoy it
We're almost done. The last step is to test it. Therefore, edit the `handler.py` and push it to the master branch of your
repository.
```python
import json
import pyjokes
def get_joke(event, context):
body = {
"message": "Greetings from Github. Your function is deployed by a Github Actions. Enjoy your joke",
"joke":pyjokes.get_joke()
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
After the push, we can see our Action deploying our AWS Lambda Function.
![successful Github Action](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/github-action.png)
After a successful run of our Github Action, we can request our function again to see if it worked.
![successful request to lambda](/static/blog/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless/request2.png)
---
I created a demo repository with a full example. You can find the repository
[here](https://github.com/philschmid/blog-github-actions-aws-lambda-python). If something is unclear let me know and I
will adjust it. |
Accelerate GPT-J inference with DeepSpeed-Inference on GPUs | https://www.philschmid.de/gptj-deepspeed-inference | 2022-09-13 | [
"GPTJ",
"DeepSpeed",
"HuggingFace",
"Optimization"
] | Learn how to optimize GPT-J for GPU inference with a 1-line of code using Hugging Face Transformers and DeepSpeed. | In this session, you will learn how to optimize GPT-2/GPT-J for inference using [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). The session will show you how to apply state-of-the-art optimization techniques using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/).
This session will focus on single-GPU inference for GPT-2, GPT-NEO and GPT-J like models.
By the end of this session, you will know how to optimize your Hugging Face Transformers models (GPT-2, GPT-J) using DeepSpeed-Inference. We are going to optimize GPT-J 6B for text generation.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load vanilla GPT-J model and set baseline](#2-load-vanilla-gpt-j-model-and-set-baseline)
3. [Optimize GPT-J for GPU using DeepSpeeds `InferenceEngine`](#3-optimize-gpt-j-for-gpu-using-deepspeeds-inferenceengine)
4. [Evaluate the performance and speed](#4-evaluate-the-performance-and-speed)
Let's get started! 🚀
_This tutorial was created and run on a g4dn.2xlarge AWS EC2 Instance including an NVIDIA T4._
---
## Quick Intro: What is DeepSpeed-Inference
[DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) is an extension of the [DeepSpeed](https://www.deepspeed.ai/) framework focused on inference workloads. [DeepSpeed Inference](https://www.deepspeed.ai/#deepspeed-inference) combines model parallelism technologies such as tensor and pipeline parallelism with custom optimized CUDA kernels.
DeepSpeed provides a seamless inference mode for compatible transformer based models trained using DeepSpeed, Megatron, and HuggingFace. For a list of compatible models please see [here](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/module_inject/replace_policy.py).
As mentioned, DeepSpeed-Inference integrates model-parallelism techniques, allowing you to run multi-GPU inference for LLMs, like [BLOOM](https://huggingface.co/bigscience/bloom) with 176 billion parameters.
If you want to learn more about DeepSpeed inference:
- [Paper: DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale](https://arxiv.org/pdf/2207.00032.pdf)
- [Blog: Accelerating large-scale model inference and training via system optimizations and compression](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/)
## 1. Setup Development Environment
Our first step is to install DeepSpeed, along with PyTorch, Transformers and some other libraries. Running the following cell will install all the required packages.
_Note: You need a machine with a GPU and a compatible CUDA installed. You can check this by running `nvidia-smi` in your terminal. If your setup is correct, you should get statistics about your GPU._
```python
!pip install torch==1.11.0 torchvision==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113 --upgrade -q
# !pip install deepspeed==0.7.2 --upgrade -q
!pip install git+https://github.com/microsoft/DeepSpeed.git@ds-inference/support-large-token-length --upgrade
!pip install transformers[sentencepiece]==4.21.2 accelerate --upgrade -q
```
Before we start, let's make sure all packages are installed correctly.
```python
import re
import torch
# check deepspeed installation
report = !python3 -m deepspeed.env_report
r = re.compile('.*ninja.*OKAY.*')
assert any(r.match(line) for line in report) == True, "DeepSpeed Inference not correct installed"
# check cuda and torch version
torch_version, cuda_version = torch.__version__.split("+")
torch_version = ".".join(torch_version.split(".")[:2])
cuda_version = f"{cuda_version[2:4]}.{cuda_version[4:]}"
r = re.compile(f'.*torch.*{torch_version}.*')
assert any(r.match(line) for line in report) == True, "Wrong Torch version"
r = re.compile(f'.*cuda.*{cuda_version}.*')
assert any(r.match(line) for line in report) == True, "Wrong Cuda version"
```
## 2. Load vanilla GPT-J model and set baseline
After we set up our environment, we create a baseline for our model. We use [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B), a GPT-J 6B model trained on the [Pile](https://pile.eleuther.ai/), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). The model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
To create our baseline, we load the model with `transformers` and run inference.
_Note: We created a [separate repository](https://huggingface.co/philschmid/gpt-j-6B-fp16-sharded) containing sharded `fp16` weights to make it easier to load the models on smaller CPUs by using the `device_map` feature to automatically place sharded checkpoints on GPU. Learn more [here](https://huggingface.co/docs/accelerate/main/en/big_modeling#accelerate.cpu_offload)_
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
# Model Repository on huggingface.co
model_id = "philschmid/gpt-j-6B-fp16-sharded"
# Load Model and Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# we use device_map auto to automatically place all shards on the GPU to save CPU memory
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
print(f"model is loaded on device {model.device.type}")
# model is loaded on device cuda
```
Let's run some inference.
```python
payload = "Hello my name is Philipp. I am getting in touch with you because i didn't get a response from you. What do I need to do to get my new card which I have requested 2 weeks ago? Please help me and answer this email in the next 7 days. Best regards and have a nice weekend but it"
input_ids = tokenizer(payload,return_tensors="pt").input_ids.to(model.device)
print(f"input payload: \n \n{payload}")
logits = model.generate(input_ids, do_sample=True, num_beams=1, min_length=128, max_new_tokens=128)
print(f"prediction: \n \n {tokenizer.decode(logits[0].tolist()[len(input_ids[0]):])}")
# input payload:
# Hello my name is Philipp. I am getting in touch with you because i didn't get a response from you. What do I need to do to get my new card which I have requested 2 weeks ago? Please help me and answer this email in the next 7 days. Best regards and have a nice weekend but it
# prediction:
# 's Friday evening for the British and you can feel that coming in on top of a Friday, please try to spend a quiet time tonight. Thankyou, Philipp
```
To create a latency baseline, we use the `measure_latency` function, which implements a simple Python loop to run inference and calculate the average and p95 latency for our model.
```python
from time import perf_counter
import numpy as np
import transformers
# hide generation warnings
transformers.logging.set_verbosity_error()
def measure_latency(model, tokenizer, payload, generation_args={},device=model.device):
input_ids = tokenizer(payload, return_tensors="pt").input_ids.to(device)
latencies = []
# warm up
for _ in range(2):
_ = model.generate(input_ids, **generation_args)
# Timed run
for _ in range(10):
start_time = perf_counter()
_ = model.generate(input_ids, **generation_args)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
time_p95_ms = 1000 * np.percentile(latencies,95)
return f"P95 latency (ms) - {time_p95_ms}; Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f};", time_p95_ms
```
We are going to use greedy search as the decoding strategy and will generate 128 new tokens with 128 tokens as input.
```python
payload="Hello my name is Philipp. I am getting in touch with you because i didn't get a response from you. What do I need to do to get my new card which I have requested 2 weeks ago? Please help me and answer this email in the next 7 days. Best regards and have a nice weekend but it"*2
print(f'Payload sequence length is: {len(tokenizer(payload)["input_ids"])}')
# generation arguments
generation_args = dict(
do_sample=False,
num_beams=1,
min_length=128,
max_new_tokens=128
)
vanilla_results = measure_latency(model,tokenizer,payload,generation_args)
print(f"Vanilla model: {vanilla_results[0]}")
# Payload sequence length is: 128
# Vanilla model: P95 latency (ms) - 8985.898722249989; Average latency (ms) - 8955.07 +\- 24.34;
```
Our model achieves latency of `8.9s` for `128` tokens or `69ms/token`.
## 3. Optimize GPT-J for GPU using DeepSpeeds `InferenceEngine`
The next and most important step is to optimize our model for GPU inference. This will be done using the DeepSpeed `InferenceEngine`. The `InferenceEngine` is initialized using the `init_inference` method, which expects at least the following parameters:
- `model`: The model to optimize.
- `mp_size`: The number of GPUs to use.
- `dtype`: The data type to use.
- `replace_with_kernel_inject`: Whether to inject custom kernels.
You can find more information about the `init_inference` method in the [DeepSpeed documentation](https://deepspeed.readthedocs.io/en/latest/inference-init.html) or [their inference blog](https://www.deepspeed.ai/tutorials/inference-tutorial/).
_Note: You might need to restart your kernel if you are running into a CUDA OOM error._
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import deepspeed
# Model Repository on huggingface.co
model_id = "philschmid/gpt-j-6B-fp16-sharded"
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
# init deepspeed inference engine
ds_model = deepspeed.init_inference(
model=model, # Transformers models
mp_size=1, # Number of GPU
dtype=torch.float16, # dtype of the weights (fp16)
replace_method="auto", # Lets DS autmatically identify the layer to replace
replace_with_kernel_inject=True, # replace the model with the kernel injector
)
print(f"model is loaded on device {ds_model.module.device}")
```
We can now inspect our model graph to see that the vanilla `GPTJLayer` has been replaced with an `HFGPTJLayer`, which includes the `DeepSpeedTransformerInference` module, a custom `nn.Module` that is optimized for inference by the DeepSpeed Team.
```python
InferenceEngine(
(module): GPTJForCausalLM(
(transformer): GPTJModel(
(wte): Embedding(50400, 4096)
(drop): Dropout(p=0.0, inplace=False)
(h): ModuleList(
(0): DeepSpeedTransformerInference(
(attention): DeepSpeedSelfAttention()
(mlp): DeepSpeedMLP()
)
```
```python
from deepspeed.ops.transformer.inference import DeepSpeedTransformerInference
assert isinstance(ds_model.module.transformer.h[0], DeepSpeedTransformerInference) == True, "Model not sucessfully initalized"
```
```python
# Test model
example = "My name is Philipp and I"
input_ids = tokenizer(example,return_tensors="pt").input_ids.to(model.device)
logits = ds_model.generate(input_ids, do_sample=True, max_length=100)
tokenizer.decode(logits[0].tolist())
# 'My name is Philipp and I live in Freiburg in Germany and I have a project called Cenapen. After three months in development already it is finally finished – and it is a Linux based device / operating system on an ARM Cortex A9 processor on a Raspberry Pi.\n\nAt the moment it offers the possibility to store data locally, it can retrieve data from a local, networked or web based Sqlite database (I’m writing this tutorial while I’'
```
## 4. Evaluate the performance and speed
As the last step, we want to take a detailed look at the performance of our optimized model. Applying optimization techniques, like graph optimizations or mixed-precision, not only impact performance (latency) those also might have an impact on the accuracy of the model. So accelerating your model comes with a trade-off.
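Before looking at latency, it can be worth a quick sanity check that the optimized model still produces sensible text. Below is a minimal sketch (assumption: you keep the vanilla model's greedy output for the same prompt around for comparison, e.g. from a separate session before applying DeepSpeed):

```python
# quick qualitative check: greedy (deterministic) generation with the optimized model
example = "My name is Philipp and I"
input_ids = tokenizer(example, return_tensors="pt").input_ids.to(ds_model.module.device)
ds_output = ds_model.generate(input_ids, do_sample=False, max_new_tokens=32)
print(tokenizer.decode(ds_output[0], skip_special_tokens=True))
# compare this string against the vanilla model's greedy output for the same prompt;
# minor differences can occur because the fused fp16 kernels change numerics slightly
```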
Let's test the performance (latency) of our optimized model. We will use the same generation args as for our vanilla model.
```python
payload = (
"Hello my name is Philipp. I am getting in touch with you because i didn't get a response from you. What do I need to do to get my new card which I have requested 2 weeks ago? Please help me and answer this email in the next 7 days. Best regards and have a nice weekend but it"
* 2
)
print(f'Payload sequence length is: {len(tokenizer(payload)["input_ids"])}')
# generation arguments
generation_args = dict(do_sample=False, num_beams=1, min_length=128, max_new_tokens=128)
ds_results = measure_latency(ds_model, tokenizer, payload, generation_args, ds_model.module.device)
print(f"DeepSpeed model: {ds_results[0]}")
# Payload sequence length is: 128
# DeepSpeed model: P95 latency (ms) - 6577.044982599967; Average latency (ms) - 6569.11 +\- 6.57;
```
Our optimized DeepSpeed model achieves a latency of `6.5s` for `128` tokens, or `50ms/token`.
We managed to accelerate the `GPT-J-6B` model latency from `8.9s` to `6.5s` for generating `128` tokens. This results in an improvement from `69ms/token` to `50ms/token`, or 1.38x.
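For reference, this is how the speedup figure is derived from the per-token latencies:

```python
# speedup = vanilla per-token latency / optimized per-token latency
vanilla_ms_per_token = 69
deepspeed_ms_per_token = 50
print(f"Speedup: {vanilla_ms_per_token / deepspeed_ms_per_token:.2f}x")  # 1.38x
```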
![gpt-j-latency](/static/blog/gptj-deepspeed-inference/gptj-inference-latency.png)
## Conclusion
We successfully optimized our GPT-J Transformers model with DeepSpeed-Inference and managed to decrease our model latency from 69ms/token to 50ms/token, or 1.38x.
Those are good results considering that applying the optimization was as easy as adding one additional call to `deepspeed.init_inference`.
But I have to say that this isn't a plug-and-play process you can transfer to any Transformers model, task, or dataset. Make sure to check if your model is compatible with DeepSpeed-Inference.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Document AI: Fine-tuning LayoutLM for document-understanding using Hugging Face Transformers | https://www.philschmid.de/fine-tuning-layoutlm | 2022-10-04 | [
"DocumentAI",
"HuggingFace",
"Transformers",
"LayoutLM"
] | Learn how to fine-tune LayoutLM for document understanding using Hugging Face Transformers. LayoutLM is a document image understanding and information extraction transformer. | In this blog, you will learn how to fine-tune [LayoutLM (v1)](https://huggingface.co/docs/transformers/model_doc/layoutlm) for document understanding using Hugging Face Transformers. LayoutLM is a document image understanding and information extraction transformer. LayoutLM (v1) is the only model in the LayoutLM family with an MIT license, which allows it to be used for commercial purposes, unlike LayoutLMv2/LayoutLMv3.
We will use the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset, a collection of 199 fully annotated forms. More information about the dataset can be found on the [dataset page](https://guillaumejaume.github.io/FUNSD/).
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load and prepare FUNSD dataset](#2-load-and-prepare-funsd-dataset)
3. [Fine-tune and evaluate LayoutLM](#3-fine-tune-and-evaluate-layoutlm)
4. [Run inference and parse form](#4-run-inference-and-parse-form)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: LayoutLM by Microsoft Research
LayoutLM is a multimodal Transformer model for document image understanding and information extraction, and can be used for form understanding and receipt understanding.
![layoutlm](/static/blog/fine-tuning-layoutlm/layoutlm.png)
- Paper: https://arxiv.org/pdf/1912.13318.pdf
- Official repo: https://github.com/microsoft/unilm/tree/master/layoutlm
---
Now we know how LayoutLM works, so let's get started. 🚀
_Note: This tutorial was created and run on a g4dn.xlarge AWS EC2 Instance including a NVIDIA T4._
## 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
Additionally, we need to install an OCR library to extract text from images. We will use [pytesseract](https://pypi.org/project/pytesseract/).
```python
# ubuntu
!sudo apt install -y tesseract-ocr
# python
!pip install pytesseract transformers datasets seqeval tensorboard
# install git-lfs for pushing model and logs to the hugging face hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## 2. Load and prepare FUNSD dataset
We will use the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset a collection of 199 fully annotated forms. The dataset is available on Hugging Face at [nielsr/funsd](https://huggingface.co/datasets/nielsr/funsd).
_Note: The LayoutLM model doesn't have an `AutoProcessor` to nicely create our input documents, but we can use the `LayoutLMv2Processor` instead._
```python
processor_id="microsoft/layoutlmv2-base-uncased"
dataset_id ="nielsr/funsd"
```
To load the `funsd` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 149
# Test dataset size: 50
```
Let's check out an example from the dataset.
```python
from PIL import Image, ImageDraw, ImageFont
image = Image.open(dataset['train'][34]['image_path'])
image = image.convert("RGB")
image.resize((350,450))
```
![png](/static/blog/fine-tuning-layoutlm/sample.png)
We can display all our classes by inspecting the features of our dataset. These `ner_tags` will later be used to create a user-friendly output after we have fine-tuned our model.
```python
labels = dataset['train'].features['ner_tags'].feature.names
print(f"Available labels: {labels}")
id2label = {v: k for v, k in enumerate(labels)}
label2id = {k: v for v, k in enumerate(labels)}
# Available labels: ['O', 'B-HEADER', 'I-HEADER', 'B-QUESTION', 'I-QUESTION', 'B-ANSWER', 'I-ANSWER']
```
To train our model we need to convert our inputs (text/image) to token IDs. This is done by a 🤗 Transformers Tokenizer and PyTesseract. If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```python
from transformers import LayoutLMv2Processor
# use LayoutLMv2 processor without ocr since the dataset already includes the ocr text
processor = LayoutLMv2Processor.from_pretrained(processor_id, apply_ocr=False)
```
Before we can process our dataset, we need to define the `features` of the processed inputs, which are later passed into the model. Features are a special dictionary that defines the internal structure of a dataset.
Compared to traditional NLP datasets we need to add the `bbox` feature, which is a 2D array of the bounding boxes for each token.
```python
from PIL import Image
from functools import partial
from datasets import Features, Sequence, ClassLabel, Value, Array2D
# we need to define custom features
features = Features(
{
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"token_type_ids": Sequence(Value(dtype="int64")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
"labels": Sequence(ClassLabel(names=labels)),
}
)
# preprocess function to prepare the samples into the correct format for the model
def process(sample, processor=None):
encoding = processor(
Image.open(sample["image_path"]).convert("RGB"),
sample["words"],
boxes=sample["bboxes"],
word_labels=sample["ner_tags"],
padding="max_length",
truncation=True,
)
del encoding["image"]
return encoding
# process the dataset and format it to pytorch
proc_dataset = dataset.map(
partial(process, processor=processor),
remove_columns=["image_path", "words", "ner_tags", "id", "bboxes"],
features=features,
).with_format("torch")
print(proc_dataset["train"].features.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels'])
```
## 3. Fine-tune and evaluate LayoutLM
After we have processed our dataset, we can start training our model. Therefore we first need to load the [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) model with the `LayoutLMForTokenClassification` class with the label mapping of our dataset.
```python
from transformers import LayoutLMForTokenClassification
# huggingface hub model id
model_id = "microsoft/layoutlm-base-uncased"
# load model with correct number of labels and mapping
model = LayoutLMForTokenClassification.from_pretrained(
model_id, num_labels=len(labels), label2id=label2id, id2label=id2label
)
```
We want to evaluate our model during training. The `Trainer` supports evaluation during training by providing a `compute_metrics` function.
We are going to use `seqeval` and the `evaluate` library to evaluate the overall f1 score for all tokens.
```python
import evaluate
import numpy as np
# load seqeval metric
metric = evaluate.load("seqeval")
# labels of the model
ner_labels = list(model.config.id2label.values())
def compute_metrics(p):
predictions, labels = p
predictions = np.argmax(predictions, axis=2)
all_predictions = []
all_labels = []
for prediction, label in zip(predictions, labels):
for predicted_idx, label_idx in zip(prediction, label):
if label_idx == -100:
continue
all_predictions.append(ner_labels[predicted_idx])
all_labels.append(ner_labels[label_idx])
return metric.compute(predictions=[all_predictions], references=[all_labels])
```
The last step is to define the hyperparameters (`TrainingArguments`) we want to use for our training. We are leveraging the [Hugging Face Hub](https://huggingface.co/models) integration of the `Trainer` to automatically push our checkpoints, logs and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
from transformers import Trainer, TrainingArguments
# hugging face parameter
repository_id = "layoutlm-funsd"
# Define training args
training_args = TrainingArguments(
output_dir=repository_id,
num_train_epochs=15,
per_device_train_batch_size=16,
per_device_eval_batch_size=8,
fp16=True,
learning_rate=3e-5,
# logging & evaluation strategies
logging_dir=f"{repository_id}/logs",
logging_strategy="epoch",
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="overall_f1",
# push to hub parameters
report_to="tensorboard",
push_to_hub=True,
hub_strategy="every_save",
hub_model_id=repository_id,
hub_token=HfFolder.get_token(),
)
# Create Trainer instance
trainer = Trainer(
model=model,
args=training_args,
train_dataset=proc_dataset["train"],
eval_dataset=proc_dataset["test"],
compute_metrics=compute_metrics,
)
```
We can start our training by using the `train` method of the `Trainer`.
```python
# Start training
trainer.train()
```
Nice, we have trained our model. 🎉 The best score we achieved is an overall f1 score of `0.787`.
![layout_training](/static/blog/fine-tuning-layoutlm/layout_training.png)
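If you want to double-check the reported score, you can re-run the evaluation on the test split after training. A minimal sketch (the metric keys follow the `seqeval` output and are prefixed with `eval_`):

```python
# re-run evaluation with the best checkpoint (load_best_model_at_end=True)
metrics = trainer.evaluate()
# the overall f1 score used as metric_for_best_model
print(metrics["eval_overall_f1"])
```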
After our training is done we also want to save our processor to the Hugging Face Hub and create a model card.
```python
# change apply_ocr to True to use the ocr text for inference
processor.feature_extractor.apply_ocr = True
# Save processor and create model card
processor.save_pretrained(repository_id)
trainer.create_model_card()
trainer.push_to_hub()
```
## 4. Run inference and parse form
Now that we have a trained model, we can use it to run inference. We will create a function that takes a document image and returns the image annotated with the predicted entities and their bounding boxes.
```python
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor
from PIL import Image, ImageDraw, ImageFont
import torch
# load model and processor from huggingface hub
model = LayoutLMForTokenClassification.from_pretrained("philschmid/layoutlm-funsd")
processor = LayoutLMv2Processor.from_pretrained("philschmid/layoutlm-funsd")
# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
return [
width * (bbox[0] / 1000),
height * (bbox[1] / 1000),
width * (bbox[2] / 1000),
height * (bbox[3] / 1000),
]
label2color = {
"B-HEADER": "blue",
"B-QUESTION": "red",
"B-ANSWER": "green",
"I-HEADER": "blue",
"I-QUESTION": "red",
"I-ANSWER": "green",
}
# draw results onto the image
def draw_boxes(image, boxes, predictions):
width, height = image.size
normalizes_boxes = [unnormalize_box(box, width, height) for box in boxes]
# draw predictions over the image
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
for prediction, box in zip(predictions, normalizes_boxes):
if prediction == "O":
continue
draw.rectangle(box, outline="black")
draw.rectangle(box, outline=label2color[prediction])
draw.text((box[0] + 10, box[1] - 10), text=prediction, fill=label2color[prediction], font=font)
return image
# run inference
def run_inference(path, model=model, processor=processor, output_image=True):
# create model input
image = Image.open(path).convert("RGB")
encoding = processor(image, return_tensors="pt")
del encoding["image"]
# run inference
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
# get labels
labels = [model.config.id2label[prediction] for prediction in predictions]
if output_image:
return draw_boxes(image, encoding["bbox"][0], labels)
else:
return labels
run_inference(dataset["test"][34]["image_path"])
```
![png](/static/blog/fine-tuning-layoutlm/result.png)
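If you prefer machine-readable output over an annotated image, you can pair each token with its predicted label instead. Below is a minimal sketch (reusing the model and processor loaded above; grouping sub-word tokens back into words is left out for brevity):

```python
# sketch: extract (label, token) pairs instead of drawing boxes
image = Image.open(dataset["test"][34]["image_path"]).convert("RGB")
encoding = processor(image, return_tensors="pt")
del encoding["image"]  # the LayoutLM (v1) model doesn't take the image tensor
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
for token, prediction in zip(tokens, predictions):
    label = model.config.id2label[prediction]
    # skip special tokens and tokens predicted as "O"
    if token in processor.tokenizer.all_special_tokens or label == "O":
        continue
    print(f"{label:12} {token}")
```

From these (label, token) pairs you could then group consecutive `B-`/`I-` tags to reconstruct questions and answers.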
## Conclusion
We managed to successfully fine-tune our LayoutLM model to extract information from forms. With only `149` training examples we achieved an overall f1 score of `0.787`, which is impressive and another proof of the power of transfer learning.
Now it's your turn to integrate LayoutLM into your own projects. 🚀
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Document AI: LiLT a better language agnostic LayoutLM model | https://www.philschmid.de/fine-tuning-lilt | 2022-11-22 | [
"DocumentAI",
"HuggingFace",
"Transformers",
"LayoutLM"
] | Learn how to fine-tune LiLt (Language independent Layout Transformer) for document understanding/document parsing using Hugging Face Transformers. | In this blog, you will learn how to fine-tune [LiLt](https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/lilt#lilt) for document understanding using Hugging Face Transformers. LiLt, or **L**anguage **i**ndependent **L**ayout **T**ransformer, can combine any pre-trained [RoBERTa](https://huggingface.co/models?other=roberta) text encoder with a lightweight Layout Transformer to enable document understanding and information extraction for any language.
This means you can use non-English RoBERTa checkpoints, e.g. [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for document understanding tasks. To convert a RoBERTa checkpoint to a LiLT checkpoint, you can follow [this guide](https://github.com/jpWang/LiLT#or-generate-your-own-checkpoint-optional).
[LiLt](https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/lilt#lilt) is released with an MIT license and is available on the [Hugging Face Hub](https://huggingface.co/models?other=lilt).
In this example, we will use the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset, a collection of 199 fully annotated forms. More information about the dataset can be found on the [dataset page](https://guillaumejaume.github.io/FUNSD/).
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load and prepare FUNSD dataset](#2-load-and-prepare-funsd-dataset)
3. [Fine-tune and evaluate LiLT](#3-fine-tune-and-evaluate-lilt)
4. [Run Inference](#4-run-inference)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: LiLT Language-independent Layout Transformer
LiLT is a language-independent Transformer model for document image understanding and information extraction, and can be used for form understanding and receipt understanding. LiLT can be pre-trained on structured documents in a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models.
![lilt](/static/blog/fine-tuning-lilt/lilt.png)
- Paper: https://arxiv.org/abs/2202.13669
- Official repo: https://github.com/jpwang/lilt
---
Now we know how LiLT works, let's get started. 🚀
_Note: This tutorial was created and run on a g4dn.xlarge AWS EC2 Instance including a NVIDIA T4._
## 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
Additionally, we need to install an OCR library to extract text from images. We will use [pytesseract](https://pypi.org/project/pytesseract/).
```python
# ubuntu
!sudo apt install -y tesseract-ocr
# python
!pip install pytesseract transformers datasets seqeval tensorboard evaluate --upgrade
```
```python
# install git-lfs for pushing model and logs to the hugging face hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## 2. Load and prepare FUNSD dataset
We will use the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset, a collection of 199 fully annotated forms. The dataset is available on Hugging Face at [nielsr/funsd](https://huggingface.co/datasets/nielsr/funsd) and [nielsr/funsd-layoutlmv3](https://huggingface.co/datasets/nielsr/funsd-layoutlmv3). We will use `nielsr/funsd-layoutlmv3`, which includes segment positions that help boost performance (as shown in [this paper](https://arxiv.org/abs/2105.11210)).
```python
#dataset_id ="nielsr/funsd"
dataset_id ="nielsr/funsd-layoutlmv3"
```
To load the `funsd` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 149
# Test dataset size: 50
```
Let's check out an example from the dataset.
```python
from PIL import Image, ImageDraw, ImageFont
image = dataset['train'][34]['image']
image = image.convert("RGB")
image.resize((350,450))
```
![sample](/static/blog/fine-tuning-lilt/sample.png)
We can display all our classes by inspecting the features of our dataset. These `ner_tags` will later be used to create a user-friendly output after we have fine-tuned our model.
```python
labels = dataset['train'].features['ner_tags'].feature.names
print(f"Available labels: {labels}")
id2label = {v: k for v, k in enumerate(labels)}
label2id = {k: v for v, k in enumerate(labels)}
# Available labels: ['O', 'B-HEADER', 'I-HEADER', 'B-QUESTION', 'I-QUESTION', 'B-ANSWER', 'I-ANSWER']
```
To train our model we need to convert our inputs (text/image) to token IDs. This is done by a 🤗 Transformers Tokenizer and PyTesseract. If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
_Note: The LiLT model doesn't have an `AutoProcessor` or `Tokenizer` to nicely create our input documents, but we can use the `LayoutLMv3Processor` or `LayoutLMv2Processor` instead._
```python
from transformers import LayoutLMv3FeatureExtractor, AutoTokenizer, LayoutLMv3Processor
model_id="SCUT-DLVCLab/lilt-roberta-en-base"
# use LayoutLMv3 processor without ocr since the dataset already includes the ocr text
feature_extractor = LayoutLMv3FeatureExtractor(apply_ocr=False) # set apply_ocr to False, the dataset already includes the OCR text
tokenizer = AutoTokenizer.from_pretrained(model_id)
# cannot use from_pretrained since the processor is not saved in the base model
processor = LayoutLMv3Processor(feature_extractor, tokenizer)
```
Before we can process our dataset, we need to define the `features` of the processed inputs, which are later passed into the model. Features are a special dictionary that defines the internal structure of a dataset.
Compared to traditional NLP datasets we need to add the `bbox` feature, which is a 2D array of the bounding boxes for each token.
```python
from PIL import Image
from functools import partial
from datasets import Features, Sequence, ClassLabel, Value, Array2D
# we need to define custom features
features = Features(
{
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(feature=Value(dtype="int64")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
"labels": Sequence(ClassLabel(names=labels)),
}
)
# preprocess function to prepare the samples into the correct format for the model
def process(sample, processor=None):
encoding = processor(
sample["image"].convert("RGB"),
sample["tokens"],
boxes=sample["bboxes"],
word_labels=sample["ner_tags"],
padding="max_length",
truncation=True,
)
# remove pixel values not needed for LiLT
del encoding["pixel_values"]
return encoding
# process the dataset and format it to pytorch
proc_dataset = dataset.map(
partial(process, processor=processor),
remove_columns=["image", "tokens", "ner_tags", "id", "bboxes"],
features=features,
).with_format("torch")
print(proc_dataset["train"].features.keys())
# dict_keys(['input_ids', 'attention_mask', 'bbox', 'labels'])
```
## 3. Fine-tune and evaluate LiLT
After we have processed our dataset, we can start training our model. Therefore, we first need to load the [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) model, which is based on an English RoBERTa model, using the `LiltForTokenClassification` class with the label mapping of our dataset.
```python
from transformers import LiltForTokenClassification
# huggingface hub model id
model_id = "SCUT-DLVCLab/lilt-roberta-en-base"
# load model with correct number of labels and mapping
model = LiltForTokenClassification.from_pretrained(
model_id, num_labels=len(labels), label2id=label2id, id2label=id2label
)
```
We want to evaluate our model during training. The `Trainer` supports evaluation during training by providing a `compute_metrics` function.
We are going to use `seqeval` and the `evaluate` library to evaluate the overall f1 score for all tokens.
```python
import evaluate
import numpy as np
# load seqeval metric
metric = evaluate.load("seqeval")
# labels of the model
ner_labels = list(model.config.id2label.values())
def compute_metrics(p):
predictions, labels = p
predictions = np.argmax(predictions, axis=2)
all_predictions = []
all_labels = []
for prediction, label in zip(predictions, labels):
for predicted_idx, label_idx in zip(prediction, label):
if label_idx == -100:
continue
all_predictions.append(ner_labels[predicted_idx])
all_labels.append(ner_labels[label_idx])
return metric.compute(predictions=[all_predictions], references=[all_labels])
```
The last step is to define the hyperparameters (`TrainingArguments`) we want to use for our training. We are leveraging the [Hugging Face Hub](https://huggingface.co/models) integration of the `Trainer` to automatically push our checkpoints, logs and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
from transformers import Trainer, TrainingArguments
# hugging face parameter
repository_id = "lilt-en-funsd"
# Define training args
training_args = TrainingArguments(
output_dir=repository_id,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
fp16=True,
learning_rate=5e-5,
max_steps=2500,
# logging & evaluation strategies
logging_dir=f"{repository_id}/logs",
logging_strategy="steps",
logging_steps=200,
evaluation_strategy="steps",
save_strategy="steps",
save_steps=200,
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="overall_f1",
# push to hub parameters
report_to="tensorboard",
push_to_hub=True,
hub_strategy="every_save",
hub_model_id=repository_id,
hub_token=HfFolder.get_token(),
)
# Create Trainer instance
trainer = Trainer(
model=model,
args=training_args,
train_dataset=proc_dataset["train"],
eval_dataset=proc_dataset["test"],
compute_metrics=compute_metrics,
)
```
We can start our training by using the `train` method of the `Trainer`.
```python
# Start training
trainer.train()
```
![lilt_training](/static/blog/fine-tuning-lilt/lilt_training.png)
Nice, we have trained our model. 🎉 Let's evaluate the best model again on the test set.
```python
trainer.evaluate()
```
The best score we achieved is an overall f1 score of `0.89`. For comparison, `LayoutLM` (v1) achieves an overall f1 score of `0.79`; that's a `12.66%` improvement.
Let's save our results and processor to the Hugging Face Hub and create a model card.
```python
# change apply_ocr to True to use the ocr text for inference
processor.feature_extractor.apply_ocr = True
# Save processor and create model card
processor.save_pretrained(repository_id)
trainer.create_model_card()
trainer.push_to_hub()
```
## 4. Run Inference
Now that we have a trained model, we can use it to run inference. We will create a function that takes a document image and returns the image annotated with the predicted entities and their bounding boxes.
```python
from transformers import LiltForTokenClassification, LayoutLMv3Processor
from PIL import Image, ImageDraw, ImageFont
import torch
# load model and processor from huggingface hub
model = LiltForTokenClassification.from_pretrained("philschmid/lilt-en-funsd")
processor = LayoutLMv3Processor.from_pretrained("philschmid/lilt-en-funsd")
# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
return [
width * (bbox[0] / 1000),
height * (bbox[1] / 1000),
width * (bbox[2] / 1000),
height * (bbox[3] / 1000),
]
label2color = {
"B-HEADER": "blue",
"B-QUESTION": "red",
"B-ANSWER": "green",
"I-HEADER": "blue",
"I-QUESTION": "red",
"I-ANSWER": "green",
}
# draw results onto the image
def draw_boxes(image, boxes, predictions):
width, height = image.size
normalizes_boxes = [unnormalize_box(box, width, height) for box in boxes]
# draw predictions over the image
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
for prediction, box in zip(predictions, normalizes_boxes):
if prediction == "O":
continue
draw.rectangle(box, outline="black")
draw.rectangle(box, outline=label2color[prediction])
draw.text((box[0] + 10, box[1] - 10), text=prediction, fill=label2color[prediction], font=font)
return image
# run inference
def run_inference(image, model=model, processor=processor, output_image=True):
# create model input
encoding = processor(image, return_tensors="pt")
del encoding["pixel_values"]
# run inference
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
# get labels
labels = [model.config.id2label[prediction] for prediction in predictions]
if output_image:
return draw_boxes(image, encoding["bbox"][0], labels)
else:
return labels
run_inference(dataset["test"][34]["image"])
```
![result](/static/blog/fine-tuning-lilt/result.png)
## Conclusion
We managed to successfully fine-tune our LiLT model to extract information from forms. With only `149` training examples we achieved an overall f1 score of `0.89`, which is `12.66%` better than the original `LayoutLM` model (`0.79`).
Additionally, LiLT can be easily adapted to other languages, which makes it a great model for multilingual document understanding.
Now it's your turn to integrate Transformers into your own projects. 🚀
---
Thanks for reading. If you have any questions, contact me via [email](mailto:schmidphlilipp1995@gmail.com). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Multi-Model GPU Inference with Hugging Face Inference Endpoints | https://www.philschmid.de/multi-model-inference-endpoints | 2022-11-17 | [
"Inference",
"HuggingFace",
"BERT",
"MultiModel"
] | Learn how to deploy multiple models onto a GPU with Hugging Face multi-model inference endpoints. | Multi-model inference endpoints provide a way to deploy multiple models onto the same infrastructure for scalable and cost-effective inference. Multi-model inference endpoints load a list of models into memory, either CPU or GPU, and dynamically use them during inference.
This blog will cover how to create a multi-model inference endpoint using 5 models on a single GPU and how to use it in your applications.
You will learn how to:
1. [Create a multi-model `EndpointHandler` class](#create-a-multi-model-endpointhandler-class)
2. [Deploy the multi-model inference endpoints](#deploy-the-multi-model-inference-endpoints)
3. [Send requests to different models](#send-requests-to-different-models)
The following diagram shows how multi-model inference endpoints look.
![Multi Model Inference endpoints.png](/static/blog/multi-model-inference-endpoints/mmie.png)
## What are Hugging Face Inference Endpoints?
[🤗 Inference Endpoints](https://huggingface.co/inference-endpoints) offer a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
A Hugging Face Inference Endpoint is built from a [Hugging Face Model Repository](https://huggingface.co/models). It supports all the [Transformers and Sentence-Transformers tasks](https://huggingface.co/docs/inference-endpoints/supported_tasks) and any arbitrary ML framework through easy customization by adding a [custom inference handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler). This [custom inference handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) can be used to implement simple inference pipelines for ML frameworks like Keras, TensorFlow, and scikit-learn, to create multi-model endpoints, or to add custom business logic to your existing transformers pipeline.
## 1. Create a multi-model `EndpointHandler` class
The first step is to create a new Hugging Face Repository with our multi-model `EndpointHandler` class. In this example, we dynamically load our models in the `EndpointHandler` on endpoint creation. Alternatively, you could add the model weights into the same repository and load them from the disk.
This means our Hugging Face Repository contains a `handler.py` with our `EndpointHandler`.
We create a new repository at [https://huggingface.co/new](https://huggingface.co/new).
![create-repository](/static/blog/multi-model-inference-endpoints/create-repository.png)
Then we create a `handler.py` with the `EndpointHandler` class. If you are unfamiliar with custom handlers on Inference Endpoints, you can check out [Custom Inference with Hugging Face Inference Endpoints](https://www.philschmid.de/custom-inference-handler) or read the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler).
An example of a multi-model `EndpointHandler` is shown below. This handler loads 5 different models using the Transformers `pipeline`.
```python
# handler.py
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
# check for GPU
device = 0 if torch.cuda.is_available() else -1
# multi-model list
multi_model_list = [
{"model_id": "distilbert-base-uncased-finetuned-sst-2-english", "task": "text-classification"},
{"model_id": "Helsinki-NLP/opus-mt-en-de", "task": "translation"},
{"model_id": "facebook/bart-large-cnn", "task": "summarization"},
{"model_id": "dslim/bert-base-NER", "task": "token-classification"},
{"model_id": "textattack/bert-base-uncased-ag-news", "task": "text-classification"},
]
class EndpointHandler():
def __init__(self, path=""):
self.multi_model={}
# load all the models onto device
for model in multi_model_list:
self.multi_model[model["model_id"]] = pipeline(model["task"], model=model["model_id"], device=device)
def __call__(self, data):
        # deserialize incoming request
inputs = data.pop("inputs", data)
parameters = data.pop("parameters", None)
model_id = data.pop("model_id", None)
# check if model_id is in the list of models
if model_id is None or model_id not in self.multi_model:
raise ValueError(f"model_id: {model_id} is not valid. Available models are: {list(self.multi_model.keys())}")
# pass inputs with all kwargs in data
if parameters is not None:
prediction = self.multi_model[model_id](inputs, **parameters)
else:
prediction = self.multi_model[model_id](inputs)
# postprocess the prediction
return prediction
```
The most important section in our handler is the `multi_model_list`, a list of dictionaries including our Hugging Face Model Ids and the task for the models.
You can customize the list to the models you want to use for your multi-model inference endpoint. In this example, we will use the following:
- `DistilBERT` model for `sentiment-analysis`
- `Marian` model for `translation`
- `BART` model for `summarization`
- `BERT` model for `token-classification`
- `BERT` model for `text-classification`
All those models will be loaded at the endpoint creation and then used dynamically during inference by providing a `model_id` attribute in the request.
_Note: The number of models is limited by the amount of GPU memory your instance has. The bigger the instance, the more models you can load._
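Before uploading, you can optionally smoke-test the handler locally. Below is a minimal sketch (assumption: it is run from the folder containing `handler.py`; instantiating the handler downloads all five models, which takes a while and needs enough memory):

```python
# hypothetical local test of the custom handler before deploying it
from handler import EndpointHandler

# instantiating the handler downloads and loads all five pipelines
my_handler = EndpointHandler(path=".")

# send a test request to one of the models
request = {
    "inputs": "It is so cool that I can use multi-models in the same endpoint.",
    "model_id": "distilbert-base-uncased-finetuned-sst-2-english",
}
print(my_handler(request))
# expected output is a list with a label/score dict, e.g. [{'label': 'POSITIVE', 'score': ...}]
```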
As the last step, we add/upload our `handler.py` to our repository. This can be done through the UI in the “Files and versions” tab.
![upload-handler](/static/blog/multi-model-inference-endpoints/upload-handler.png)
## 2. Deploy the multi-model inference endpoints
The next step is to deploy our multi-model inference endpoint. We can use the “deploy” button, which appeared after we added our `handler.py`. This will directly link us to the [Inference Endpoints UI](https://ui.endpoints.huggingface.co/) with our repository pre-selected.
![deploy-modal](/static/blog/multi-model-inference-endpoints/deploy-modal.png)
We change the Instance Type to “GPU [small]” to use an NVIDIA T4 and then click “Create Endpoint”
![create-endpoint](/static/blog/multi-model-inference-endpoints/create-endpoint.png)
After a few minutes, our Inference Endpoint will be up and running. We can also check the logs to see the download of our five models.
## 3. Send requests to different models
After our Endpoint is “running” we can send requests to our different models. This can be done through the UI, with the inference widget, or programmatically using HTTP requests.
Don’t forget! We must add the `model_id` parameter, which defines the model we want to use, in addition to our regular `inputs`. You can find example payloads for all tasks in the [documentation](https://huggingface.co/docs/inference-endpoints/supported_tasks#example-request-payloads).
To send a request to our `DistilBERT` model, we use the following JSON payload.
```json
{
"inputs": "It is so cool that I can use multi-models in the same endpoint.",
"model_id": "distilbert-base-uncased-finetuned-sst-2-english"
}
```
![run-inference](/static/blog/multi-model-inference-endpoints/run-inference.png)
To send programmatic requests, we can, for example, use Python and the `requests` library.
To send a request to our `BART` model to summarize some text, we can use the following Python snippet.
```python
import json
import requests as r
ENDPOINT_URL = "" # url of your endpoint
HF_TOKEN = "" # token of the account you deployed
# define model and payload
model_id = "facebook/bart-large-cnn"
text = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
request_body = {"inputs": text, "model_id": model_id}
# HTTP headers for authorization
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
# send request
response = r.post(ENDPOINT_URL, headers=headers, json=request_body)
prediction = response.json()
# [{'summary_text': 'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world.'}]
```
## Conclusion
Now you know how to deploy a multi-model inference endpoint and how it can help you reduce your costs while still benefiting from GPU inference.
As of today, multi-model endpoints are “single” threaded (1 worker), which means your endpoint processes all requests in sequence. By having multiple models in the same endpoint, you might have lower throughput depending on your traffic.
Further improvements and customization you could make are:
- Save the model weights into our multi-model endpoint repository instead of loading them at startup time.
- Customize model inference by adding an `EndpointHandler` for each model and using those rather than the `pipeline`.
As you can see, multi-model inference endpoints can be adjusted and customized to our needs. But you should still watch your request pattern and the load of models to identify if single model endpoints for high-traffic models make sense.
---
Thanks for reading. If you have any questions, contact me via **[email](mailto:philipp@huggingface.co)** or **[forum](https://discuss.huggingface.co/c/inference-endpoints/64)**. You can also connect with me on **[Twitter](https://twitter.com/_philschmid)** or **[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/)**. |
Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker | https://www.philschmid.de/sagemaker-distributed-training | 2021-04-09 | [
"AWS",
"HuggingFace",
"BART"
] | Learn how to train distributed models for summarization using Hugging Face Transformers and Amazon SageMaker and upload them afterwards to huggingface.co. | In case you missed it: on March 25th [we announced a collaboration with Amazon SageMaker](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face) to make it easier to create State-of-the-Art Machine Learning models, and ship cutting-edge NLP features faster.
Together with the SageMaker team, we built 🤗 Transformers optimized [Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) to accelerate training of Transformers-based models. Thanks AWS friends!🤗 🚀
With the new HuggingFace estimator in the [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/), you can start training with a single line of code.
![thumbnail](/static/blog/sagemaker-distributed-training/thumbnail.png)
The [announcement blog post](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face) provides all the information you need to know about the integration, including a "Getting Started" example and links to documentation, examples, and features.
They are listed again here:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
If you're not familiar with Amazon SageMaker: _"Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models." [[REF](https://aws.amazon.com/sagemaker/faqs/)]_
---
## Tutorial
We will use the new [Hugging Face DLCs](https://github.com/aws/deep-learning-containers/tree/master/huggingface) and [Amazon SageMaker extension](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html#huggingface-estimator) to train a distributed Seq2Seq-transformer model on the `summarization` task using the `transformers` and `datasets` libraries, and then upload the model to [huggingface.co](http://huggingface.co) and test it.
As [distributed training strategy](https://huggingface.co/transformers/sagemaker.html#distributed-training-data-parallel) we are going to use [SageMaker Data Parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), which has been built into the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) API. To use data-parallelism we only have to define the `distribution` parameter in our `HuggingFace` estimator.
```python
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
```
In this tutorial, we will use an Amazon SageMaker Notebook Instance for running our training job. You can learn [here how to set up a Notebook Instance](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html).
**What are we going to do:**
- Set up a development environment and install sagemaker
- Choose 🤗 Transformers `examples/` script
- Configure distributed training and hyperparameters
- Create a `HuggingFace` estimator and start training
- Upload the fine-tuned model to [huggingface.co](http://huggingface.co)
- Test inference
## Model and Dataset
We are going to fine-tune [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [samsum](https://huggingface.co/datasets/samsum) dataset. _"BART is sequence-to-sequence model trained with denoising as pretraining objective."_ [[REF](https://github.com/pytorch/fairseq/blob/master/examples/bart/README.md)]
The `samsum` dataset contains about 16k messenger-like conversations with summaries.
```json
{
"id": "13818513",
"summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
"dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"
}
```
---
## Set up a development environment and install sagemaker
After our SageMaker Notebook Instance is running, we can select either Jupyter Notebook or JupyterLab and create a new notebook with the `conda_pytorch_p36` kernel.
_**Note:** The use of Jupyter is optional: We could also launch SageMaker Training jobs from anywhere we have an SDK installed, connectivity to the cloud and appropriate permissions, such as a Laptop, another IDE or a task scheduler like Airflow or AWS Step Functions._
After that we can install the required dependencies
```bash
!pip install transformers "datasets[s3]" sagemaker --upgrade
```
[Install](https://github.com/git-lfs/git-lfs/wiki/Installation) `git-lfs` for the model upload.
```bash
!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash
!sudo yum install git-lfs -y
!git lfs install
```
To run training on SageMaker we need to create a sagemaker Session and provide an IAM role with the right permission. This IAM role will be later attached to the `TrainingJob` enabling it to download data, e.g. from Amazon S3.
```python
import sagemaker
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
print(f"IAM role arn used for running training: {role}")
print(f"S3 bucket used for storing artifacts: {sess.default_bucket()}")
```
---
## Choose 🤗 Transformers `examples/` script
The [🤗 Transformers repository](https://github.com/huggingface/transformers/tree/master/examples) contains several `examples/` scripts for fine-tuning models on tasks from `language-modeling` to `token-classification`. In our case, we are using `run_summarization.py` from the `seq2seq/` examples.
_**Note:** you can use this tutorial as-is to train your model with a different example script._
Since the `HuggingFace` Estimator has git support built-in, we can specify a [training script stored in a GitHub repository](https://sagemaker.readthedocs.io/en/stable/overview.html#use-scripts-stored-in-a-git-repository) as `entry_point` and `source_dir`.
We are going to use the `transformers 4.4.2` DLC which means we need to configure the `v4.4.2` as the branch to pull the compatible example scripts.
```python
#git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'} # v4.4.2 is referring to the `transformers_version you use in the estimator.
# used due an missing package in v4.4.2
git_config = {'repo': 'https://github.com/philschmid/transformers.git','branch': 'master'} # v4.4.2 is referring to the `transformers_version you use in the estimator.
```
---
## Configure distributed training and hyperparameters
Next, we will define our `hyperparameters` and configure our distributed training strategy. As hyperparameter, we can define any [Seq2SeqTrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#seq2seqtrainingarguments) and the ones defined in [run_summarization.py](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#sequence-to-sequence-training-and-evaluation).
```python
# hyperparameters, which are passed into the training job
hyperparameters={
'per_device_train_batch_size': 4,
'per_device_eval_batch_size': 4,
'model_name_or_path':'facebook/bart-large-cnn',
'dataset_name':'samsum',
'do_train':True,
'do_predict': True,
'predict_with_generate': True,
'output_dir':'/opt/ml/model',
'num_train_epochs': 3,
'learning_rate': 5e-5,
'seed': 7,
'fp16': True,
}
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
```
Since we are using [SageMaker Data Parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), our `total_batch_size` will be `per_device_train_batch_size` \* `n_gpus`.
---
## Create a `HuggingFace` estimator and start training
The last step before training is creating a `HuggingFace` estimator. The Estimator handles the end-to-end Amazon SageMaker training. We define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, and which `hyperparameters` are passed in.
```python
from sagemaker.huggingface import HuggingFace
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point='run_summarization.py', # script
source_dir='./examples/seq2seq', # relative path to example
git_config=git_config,
instance_type='ml.p3dn.24xlarge',
instance_count=2,
transformers_version='4.4.2',
pytorch_version='1.6.0',
py_version='py36',
role=role,
hyperparameters = hyperparameters,
distribution = distribution
)
```
As `instance_type` we are using `ml.p3dn.24xlarge`, which contains 8x NVIDIA V100 GPUs, with an `instance_count` of 2. This means we are going to run training on 16 GPUs with a `total_batch_size` of 16\*4=64. We are going to train a 400 million parameter model with a `total_batch_size` of 64, which is just wow.
To start our training we call the `.fit()` method.
```python
# starting the training job
huggingface_estimator.fit()
```
```bash
2021-04-01 13:00:35 Starting - Starting the training job...
2021-04-01 13:01:03 Starting - Launching requested ML instancesProfilerReport-1617282031: InProgress
2021-04-01 13:02:23 Starting - Preparing the instances for training......
2021-04-01 13:03:25 Downloading - Downloading input data...
2021-04-01 13:04:04 Training - Downloading the training image...............
2021-04-01 13:06:33 Training - Training image download completed. Training in progress
....
....
2021-04-01 13:16:47 Uploading - Uploading generated training model
2021-04-01 13:27:49 Completed - Training job completed
Training seconds: 2882
Billable seconds: 2882
```
The training seconds are 2882 because they are multiplied by the number of instances. If we calculate 2882/2=1441, we get the duration from "Downloading the training image" to "Training job completed".
Converted to real money, our training of a state-of-the-art summarization model on 16 NVIDIA Tesla V100 GPUs comes down to ~$28.
---
## Upload the fine-tuned model to [huggingface.co](http://huggingface.co)
Since our model achieved a pretty good score we are going to upload it to [huggingface.co](http://huggingface.co), create a `model_card` and test it with the Hosted Inference widget. To upload a model you need to [create an account here](https://huggingface.co/join).
We can download our model from Amazon S3 and unzip it using the following snippet.
```python
import os
import tarfile
from sagemaker.s3 import S3Downloader
local_path = 'my_bart_model'
os.makedirs(local_path, exist_ok = True)
# download model from S3
S3Downloader.download(
s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is located
local_path=local_path, # local path where *.tar.gz will be saved
sagemaker_session=sess # sagemaker session used for training the model
)
# unzip model
tar = tarfile.open(f"{local_path}/model.tar.gz", "r:gz")
tar.extractall(path=local_path)
tar.close()
os.remove(f"{local_path}/model.tar.gz")
```
Before we upload our model to [huggingface.co](http://huggingface.co) we need to create a `model_card`. The `model_card` describes the model, includes hyperparameters and results, and specifies which dataset was used for training. To create a `model_card` we create a `README.md` in our `local_path`.
```python
import json
# read eval and test results
with open(f"{local_path}/eval_results.json") as f:
eval_results_raw = json.load(f)
eval_results={}
eval_results["eval_rouge1"] = eval_results_raw["eval_rouge1"]
eval_results["eval_rouge2"] = eval_results_raw["eval_rouge2"]
eval_results["eval_rougeL"] = eval_results_raw["eval_rougeL"]
eval_results["eval_rougeLsum"] = eval_results_raw["eval_rougeLsum"]
with open(f"{local_path}/test_results.json") as f:
test_results_raw = json.load(f)
test_results={}
test_results["test_rouge1"] = test_results_raw["test_rouge1"]
test_results["test_rouge2"] = test_results_raw["test_rouge2"]
test_results["test_rougeL"] = test_results_raw["test_rougeL"]
test_results["test_rougeLsum"] = test_results_raw["test_rougeLsum"]
```
After we have extracted all the metrics we want to include, we are going to create our `README.md`. In addition to the automatically generated results table, we add the metrics manually to the `metadata` of our model card under `model-index`.
```python
import json
MODEL_CARD_TEMPLATE = """
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
datasets:
- samsum
model-index:
- name: {model_name}
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
type: samsum
metrics:
- name: Validation ROGUE-1
type: rogue-1
value: 42.621
- name: Validation ROGUE-2
type: rogue-2
value: 21.9825
- name: Validation ROGUE-L
type: rogue-l
value: 33.034
- name: Test ROGUE-1
type: rogue-1
value: 41.3174
- name: Test ROGUE-2
type: rogue-2
value: 20.8716
- name: Test ROGUE-L
type: rogue-l
value: 32.1337
widget:
- text: |
Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
---
## `{model_name}`
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
For more information look at:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
## Hyperparameters
{hyperparameters}
## Usage
from transformers import pipeline
summarizer = pipeline("summarization", model="philschmid/{model_name}")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
## Results
| key | value |
| --- | ----- |
{eval_table}
{test_table}
"""
# Generate model card (todo: add more data from Trainer)
model_card = MODEL_CARD_TEMPLATE.format(
model_name=f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}",
hyperparameters=json.dumps(hyperparameters, indent=4, sort_keys=True),
eval_table="\n".join(f"| {k} | {v} |" for k, v in eval_results.items()),
test_table="\n".join(f"| {k} | {v} |" for k, v in test_results.items()),
)
with open(f"{local_path}/README.md", "w") as f:
f.write(model_card)
```
After we have our unzipped model and model card located in `my_bart_model`, we can either use the `huggingface_hub` SDK to create a repository and upload it to [huggingface.co](https://huggingface.co), or just go to https://huggingface.co/new, create a new repository there, and upload it.
```python
from getpass import getpass
from huggingface_hub import HfApi, Repository
hf_username = "philschmid" # your username on huggingface.co
hf_email = "philipp@huggingface.co" # email used for commit
repository_name = f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}" # repository name on huggingface.co
password = getpass("Enter your password:") # creates a prompt for entering password
# get hf token
token = HfApi().login(username=hf_username, password=password)
# create repository
repo_url = HfApi().create_repo(token=token, name=repository_name, exist_ok=True)
# create a Repository instance
model_repo = Repository(use_auth_token=token,
clone_from=repo_url,
local_dir=local_path,
git_user=hf_username,
git_email=hf_email)
# push model to the hub
model_repo.push_to_hub()
```
---
## Test inference
After we have uploaded our model, we can access it at `https://huggingface.co/{hf_username}/{repository_name}`
```python
print(f"https://huggingface.co/{hf_username}/{repository_name}")
```
And use the "Hosted Inference API" widget to test it.
[https://huggingface.co/philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum)
![inference](/static/blog/sagemaker-distributed-training/inference-test.png)
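If you prefer testing it programmatically, you can also query the hosted Inference API with a few lines of Python. The snippet below is a minimal sketch: it assumes you have a Hugging Face access token and that the summarization pipeline returns its usual list with a `summary_text` field.
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/philschmid/bart-large-cnn-samsum"
headers = {"Authorization": "Bearer hf_xxx"}  # replace with your Hugging Face access token

conversation = """Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
"""

# send the dialogue to the hosted Inference API and print the generated summary
response = requests.post(API_URL, headers=headers, json={"inputs": conversation})
print(response.json())  # e.g. [{"summary_text": "..."}]
```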
---
You can find the code [here](https://github.com/huggingface/notebooks/tree/master/sagemaker/08_distributed_summarization_bart_t5). Feel free to contact us or ask a question on the forum.
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker | https://www.philschmid.de/knowledge-distillation-bert-transformers | 2022-02-01 | [
"HuggingFace",
"AWS",
"BERT",
"PyTorch"
] | Learn how to apply task-specific knowledge distillation for BERT and text-classification using Hugging Face Transformers & Amazon SageMaker including Hyperparameter search. | Welcome to this end-to-end task-specific knowledge distillation Text-Classification example using Transformers, PyTorch & Amazon SageMaker. Distillation is the process of training a small "student" to mimic a larger "teacher". In this example, we will use a [BERT-base](https://huggingface.co/textattack/bert-base-uncased-SST-2) as Teacher and [BERT-Tiny](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) as Student. We will use [Text-Classification](https://huggingface.co/tasks/text-classification) as our task-specific knowledge distillation task and the [Stanford Sentiment Treebank v2 (SST-2)](https://paperswithcode.com/dataset/sst) dataset for training.
There are two different types of knowledge distillation: task-agnostic knowledge distillation (right) and task-specific knowledge distillation (left). In this example we are going to use task-specific knowledge distillation.
![knowledge-distillation](/static/blog/knowledge-distillation-bert-transformers/knowledge-distillation.png)
_Task-specific distillation (left) versus task-agnostic distillation (right). Figure from FastFormers by Y. Kim and H. Awadalla [arXiv:2010.13382]._
In task-specific knowledge distillation a "second step of distillation" is used to "fine-tune" the model on a given dataset. This idea comes from the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf), where it was shown that distilling during the adaptation phase performed better than simply fine-tuning the distilled language model:
> We also studied whether we could add another step of distillation during the adaptation phase by fine-tuning DistilBERT on SQuAD using a BERT model previously fine-tuned on SQuAD as a teacher for an additional term in the loss (knowledge distillation). In this setting, there are thus two successive steps of distillation, one during the pre-training phase and one during the adaptation phase. In this case, we were able to reach interesting performances given the size of the model: 79.8 F1 and 70.4 EM, i.e. within 3 points of the full model.
If you are more interested in those topics, you should definitely read:
- [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108)
- [FastFormers: Highly Efficient Transformer Models for Natural Language Understanding](https://arxiv.org/abs/2010.13382)
Especially the [FastFormers paper](https://arxiv.org/abs/2010.13382) contains great research on what works and doesn't work when using knowledge distillation.
---
Huge thanks to [Lewis Tunstall](https://www.linkedin.com/in/lewis-tunstall/) and his great [Weeknotes: Distilling distilled transformers](https://lewtun.github.io/blog/weeknotes/nlp/huggingface/transformers/2021/01/17/wknotes-distillation-and-generation.html#fn-1)
## Installation
```python
#%pip install "pytorch==1.10.1"
%pip install transformers datasets tensorboard --upgrade
!sudo apt-get install git-lfs
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on [Hugging Face](https://huggingface.co/join).
If you already have an account you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## Setup & Configuration
In this step we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. the `teacher` and `student` models we will use.
In this example, we will use [BERT-base](https://huggingface.co/textattack/bert-base-uncased-SST-2) as Teacher and [BERT-Tiny](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) as Student. Our Teacher is already fine-tuned on our dataset, which makes it easy for us to directly start the distillation training job rather than having to fine-tune the teacher first and then distill it afterwards.
_**IMPORTANT**: This example will only work with a `Teacher` & `Student` combination where the Tokenizer is creating the same output._
Additionally, the [FastFormers: Highly Efficient Transformer Models for Natural Language Understanding](https://arxiv.org/abs/2010.13382) paper describes an additional phenomenon:
> In our experiments, we have observed that distilled models do not work well when distilled to a different model type. Therefore, we restricted our setup to avoid distilling RoBERTa model to BERT or vice versa. The major difference between the two model groups is the input token (sub-word) embedding. We think that different input embedding spaces result in different output embedding spaces, and knowledge transfer with different spaces does not work well.
```python
student_id = "google/bert_uncased_L-2_H-128_A-2"
teacher_id = "textattack/bert-base-uncased-SST-2"
# name for our repository on the hub
repo_name = "tiny-bert-sst2-distilled"
```
Below are some checks to make sure the `Teacher` & `Student` are creating the same output.
```python
from transformers import AutoTokenizer
# init tokenizer
teacher_tokenizer = AutoTokenizer.from_pretrained(teacher_id)
student_tokenizer = AutoTokenizer.from_pretrained(student_id)
# sample input
sample = "This is a basic example, with different words to test."
# assert results
assert teacher_tokenizer(sample) == student_tokenizer(sample), "Tokenizers haven't created the same output"
```
## Dataset & Pre-processing
As dataset we will use the [Stanford Sentiment Treebank v2 (SST-2)](https://paperswithcode.com/dataset/sst), a text-classification dataset for `sentiment-analysis`, which is included in the [GLUE benchmark](https://gluebenchmark.com/). The dataset is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges. It uses the two-way (positive/negative) class split, with only sentence-level labels.
```python
dataset_id="glue"
dataset_config="sst2"
```
To load the `sst2` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
dataset = load_dataset(dataset_id,dataset_config)
```
### Pre-processing & Tokenization
To distill our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
We are going to use the tokenizer of the `Teacher`, but since both create the same output you could also go with the `Student` tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(teacher_id)
```
Additionally we add the `truncation=True` and `max_length=512` to align the length and truncate texts that are bigger than the maximum size allowed by the model.
```python
def process(examples):
tokenized_inputs = tokenizer(
examples["sentence"], truncation=True, max_length=512
)
return tokenized_inputs
tokenized_datasets = dataset.map(process, batched=True)
tokenized_datasets = tokenized_datasets.rename_column("label","labels")
tokenized_datasets["test"].features
```
## Distilling the model using `PyTorch` and `DistillationTrainer`
Now that our `dataset` is processed, we can start distilling the model. Normally, when fine-tuning a transformer model using PyTorch, you should go with the `Trainer-API`. The [Trainer](https://huggingface.co/docs/transformers/v4.16.1/en/main_classes/trainer#transformers.Trainer) class provides an API for feature-complete training in PyTorch for most standard use cases.
In our example we cannot use the `Trainer` out-of-the-box, since we need to pass in two models, the `Teacher` and the `Student`, and compute the loss for both. But we can subclass the `Trainer` to create a `DistillationTrainer` which will take care of it, only overwriting the [compute_loss](https://github.com/huggingface/transformers/blob/c4ad38e5ac69e6d96116f39df789a2369dd33c21/src/transformers/trainer.py#L1962) method as well as the `__init__` method. In addition to this, we also need to subclass the `TrainingArguments` to include our distillation hyperparameters.
```python
from transformers import TrainingArguments, Trainer
import torch
import torch.nn as nn
import torch.nn.functional as F
class DistillationTrainingArguments(TrainingArguments):
def __init__(self, *args, alpha=0.5, temperature=2.0, **kwargs):
super().__init__(*args, **kwargs)
self.alpha = alpha
self.temperature = temperature
class DistillationTrainer(Trainer):
def __init__(self, *args, teacher_model=None, **kwargs):
super().__init__(*args, **kwargs)
self.teacher = teacher_model
# place teacher on same device as student
self._move_model_to_device(self.teacher,self.model.device)
self.teacher.eval()
def compute_loss(self, model, inputs, return_outputs=False):
# compute student output
outputs_student = model(**inputs)
student_loss=outputs_student.loss
# compute teacher output
with torch.no_grad():
outputs_teacher = self.teacher(**inputs)
# assert size
assert outputs_student.logits.size() == outputs_teacher.logits.size()
# Soften probabilities and compute distillation loss
loss_function = nn.KLDivLoss(reduction="batchmean")
loss_logits = (loss_function(
F.log_softmax(outputs_student.logits / self.args.temperature, dim=-1),
F.softmax(outputs_teacher.logits / self.args.temperature, dim=-1)) * (self.args.temperature ** 2))
# Return weighted student loss
loss = self.args.alpha * student_loss + (1. - self.args.alpha) * loss_logits
return (loss, outputs_student) if return_outputs else loss
```
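For reference, the loss computed in `compute_loss` above combines the regular cross-entropy loss of the student with the KL divergence between the temperature-softened student and teacher distributions. Written out (with z_s and z_t denoting the student and teacher logits, T the temperature and alpha the weighting factor), this is:
```latex
\mathcal{L} = \alpha \cdot \mathcal{L}_{CE}
            + (1 - \alpha) \cdot T^{2} \cdot
              \mathrm{KL}\!\left(
                \mathrm{softmax}\!\left(\frac{z_s}{T}\right)
                \,\Big\|\,
                \mathrm{softmax}\!\left(\frac{z_t}{T}\right)
              \right)
```
The T² factor keeps the gradient magnitude of the soft-target term roughly comparable across different temperatures, which is why it also appears in the code above.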
### Hyperparameter Definition, Model Loading
```python
from transformers import AutoModelForSequenceClassification, DataCollatorWithPadding
from huggingface_hub import HfFolder
# create label2id, id2label dicts for nice outputs for the model
labels = tokenized_datasets["train"].features["labels"].names
num_labels = len(labels)
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
# define training args
training_args = DistillationTrainingArguments(
output_dir=repo_name,
num_train_epochs=7,
per_device_train_batch_size=128,
per_device_eval_batch_size=128,
fp16=True,
learning_rate=6e-5,
seed=33,
# logging & evaluation strategies
logging_dir=f"{repo_name}/logs",
logging_strategy="epoch", # to get more information to TB
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="accuracy",
report_to="tensorboard",
# push to hub parameters
push_to_hub=True,
hub_strategy="every_save",
hub_model_id=repo_name,
hub_token=HfFolder.get_token(),
# distilation parameters
alpha=0.5,
temperature=4.0
)
# define data_collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# define model
teacher_model = AutoModelForSequenceClassification.from_pretrained(
teacher_id,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
)
# define student model
student_model = AutoModelForSequenceClassification.from_pretrained(
student_id,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
)
```
### Evaluation metric
We can create a `compute_metrics` function to evaluate our model on the test set. This function will be used during the training process to compute the `accuracy` of our model.
```python
from datasets import load_metric
import numpy as np
# define metrics and metrics function
accuracy_metric = load_metric( "accuracy")
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
acc = accuracy_metric.compute(predictions=predictions, references=labels)
return {
"accuracy": acc["accuracy"],
}
```
## Training
To start training, we first create our `DistillationTrainer` with our models, datasets and training arguments.
```python
trainer = DistillationTrainer(
student_model,
training_args,
teacher_model=teacher_model,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
```
Now we start the training by calling `trainer.train()`:
```python
trainer.train()
```
## Hyperparameter Search for Distillation parameter `alpha` & `temperature` with optuna
The parameters `alpha` & `temperature` in the `DistillationTrainer` can also be used when doing hyperparameter search to maximize our "knowledge extraction". As hyperparameter optimization framework we are using [Optuna](https://optuna.org/), which has an integration into the `Trainer-API`. Since the `DistillationTrainer` is a subclass of the `Trainer`, we can use `hyperparameter_search` without any code changes.
```python
#%pip install optuna
```
To do Hyperparameter Optimization using `optuna` we need to define our hyperparameter space. In this example we are trying to optimize/maximize the `num_train_epochs`, `learning_rate`, `alpha` & `temperature` for our `student_model`.
```python
def hp_space(trial):
return {
"num_train_epochs": trial.suggest_int("num_train_epochs", 2, 10),
"learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3 ,log=True),
"alpha": trial.suggest_float("alpha", 0, 1),
"temperature": trial.suggest_int("temperature", 2, 30),
}
```
To start our hyperparameter search we just need to call `hyperparameter_search`, providing our `hp_space` and the number of trials to run.
```python
def student_init():
return AutoModelForSequenceClassification.from_pretrained(
student_id,
num_labels=num_labels,
id2label=id2label,
label2id=label2id
)
trainer = DistillationTrainer(
model_init=student_init,
args=training_args,
teacher_model=teacher_model,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
best_run = trainer.hyperparameter_search(
n_trials=50,
direction="maximize",
hp_space=hp_space
)
print(best_run)
```
Since optuna only finds the best hyperparameters, we need to fine-tune our model again using the best hyperparameters from the `best_run`.
```python
# overwrite initial hyperparameters with from the best_run
for k,v in best_run.hyperparameters.items():
setattr(training_args, k, v)
# Define a new repository to store our distilled model
best_model_ckpt = "tiny-bert-best"
training_args.output_dir = best_model_ckpt
```
We have overwritten the default hyperparameters with the ones from our `best_run` and can now start the training.
```python
# Create a new Trainer with optimal parameters
optimal_trainer = DistillationTrainer(
student_model,
training_args,
teacher_model=teacher_model,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
optimal_trainer.train()
# save best model, metrics and create model card
optimal_trainer.create_model_card(model_name=training_args.hub_model_id)
optimal_trainer.push_to_hub()
```
```python
from huggingface_hub import HfApi
whoami = HfApi().whoami()
username = whoami['name']
print(f"https://huggingface.co/{username}/{repo_name}")
```
## Results & Conclusion
We were able to achieve an `accuracy` of 0.8337, which is a very good result for our model. Our distilled `Tiny-BERT` has 96% fewer parameters than the teacher `bert-base` and runs ~46.5x faster while preserving over 90% of BERT's performance as measured on the SST-2 dataset.
| model | Parameter | Speed-up | Accuracy |
| --------- | --------- | -------- | -------- |
| BERT-base | 109M | 1x | 93.2% |
| tiny-BERT | 4M | 46.5x | 83.4% |
_Note: The [FastFormers paper](https://arxiv.org/abs/2010.13382) uncovered that the biggest boost in performance is observed when the student has 6 or more layers. The [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) we used only has 2, which means that changing our student to, e.g., `distilbert-base-uncased` should yield better performance in terms of accuracy._
If you are now planning to implement and add task-specific knowledge distillation to your models, I suggest taking a look at the [sagemaker-distillation](https://github.com/philschmid/knowledge-distillation-transformers-pytorch-sagemaker/blob/master/sagemaker-distillation.ipynb) notebook, which shows how to run task-specific knowledge distillation on Amazon SageMaker. For that example I created a script derived from this notebook to make it as easy as possible for you to use. You only need to define your `teacher_id`, `student_id` as well as your `dataset` config to run task-specific knowledge distillation for `text-classification`.
```python
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters={
'teacher_id':'textattack/bert-base-uncased-SST-2',
'student_id':'google/bert_uncased_L-2_H-128_A-2',
'dataset_id':'glue',
'dataset_config':'sst2',
# distillation parameter
'alpha': 0.5,
'temparature': 4,
# hpo parameter
"run_hpo": True,
"n_trials": 100,
}
# create the Estimator
huggingface_estimator = HuggingFace(..., hyperparameters=hyperparameters)
# start knowledge distillation training
huggingface_estimator.fit()
```
In conclusion, it is just incredible how easily Transformers and the `Trainer API` can be used to implement task-specific knowledge distillation. We only needed to write ~20 lines of custom code, deriving a `DistillationTrainer` from the `Trainer`, to support task-specific knowledge distillation while leveraging all benefits of the `Trainer API` like evaluation, hyperparameter tuning, and model card creation.
In addition, we used Amazon SageMaker to easily scale our training without thinking too much about the infrastructure and how we iterate on our experiments. In the end we created an example, which can be used for any text-classification dataset and teacher & student combination for task-specific knowledge distillation.
I believe this will help companies improve the production performance of their Transformers even more by implementing task-specific knowledge distillation as one part of their MLOps pipeline.
---
You can find the code [here](https://github.com/philschmid/knowledge-distillation-transformers-pytorch-sagemaker) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Stable Diffusion Inpainting example with Hugging Face inference Endpoints | https://www.philschmid.de/stable-diffusion-inpainting-inference-endpoints | 2022-12-15 | [
"Diffusion",
"Inference",
"HuggingFace",
"Generation"
] | Learn how to deploy Stable Diffusion 2.0 Inpainting on Hugging Face Inference Endpoints to manipulate images. | [Inpainting](https://en.wikipedia.org/wiki/Inpainting) refers to the process of replacing/deteriorating or filling in missing data in an artwork to complete the image. This process was commonly used in image restoration. But with the latest AI developments and breakthroughs, like [DALL-E by OpenAI](https://openai.com/dall-e-2/) or Stable Diffusion by [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/), inpainting can be used with generative models and achieve impressive results.
You can try it out yourself at the [RunwayML Stable Diffusion Inpainting Space](https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting)
![Inpainting.gif](/static/blog/stable-diffusion-inpainting-inference-endpoints/Inpainting.gif)
If you are now as impressed as I am, you are probably asking yourself: “ok, how can I integrate inpainting into my applications in a scalable, reliable, and secure way? How can I use it as an API?”
That's where Hugging Face Inference Endpoints can help you! **[🤗 Inference Endpoints](https://huggingface.co/inference-endpoints)** offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
Inference Endpoints already has support for **[text-to-image generation using Stable Diffusion](https://www.philschmid.de/stable-diffusion-inference-endpoints),** which enables you to generate Images from a text prompt. With this blog post, you will learn how to enable inpainting workflows with Inference Endpoints using the [custom handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) feature. [Custom handlers](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) allow users to modify, customize and extend the inference step of your model.
![thumbnail](/static/blog/stable-diffusion-inpainting-inference-endpoints/thumbnail.png)
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active credit card. (Add billing [here](https://huggingface.co/settings/billing))
2. You can access the UI at: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
The Tutorial will cover how to:
1. [Create Inpainting Inference Handler](#1-create-inpainting-inference-handler)
2. [Deploy Stable Diffusion 2 Inpainting as Inference Endpoint](#2-deploy-stable-diffusion-2-inpainting-as-inference-endpoint)
3. [Integrate Stable Diffusion Inpainting as API and send HTTP requests using Python](#3-integrate-stable-diffusion-inpainting-as-api-and-send-http-requests-using-python)
### TL;DR;
You can directly hit “deploy” on this repository to get started: [https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint)
## 1. Create Inpainting Inference Handler
This tutorial does not cover how to create the custom handler for inference. If you want to learn how to create a custom handler for Inference Endpoints, you can either check out the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler).
We are going to deploy [philschmid/stable-diffusion-2-inpainting-endpoint](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint), which implements the following `handler.py` for [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting).
```python
from typing import Dict, List, Any
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionInpaintPipeline
from PIL import Image
import base64
from io import BytesIO
# set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type != 'cuda':
raise ValueError("need to run on GPU")
class EndpointHandler():
def __init__(self, path=""):
# load StableDiffusionInpaintPipeline pipeline
self.pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16)
# use DPMSolverMultistepScheduler
self.pipe.scheduler = DPMSolverMultistepScheduler.from_config(self.pipe.scheduler.config)
# move to device
self.pipe = self.pipe.to(device)
def __call__(self, data: Any) -> List[List[Dict[str, float]]]:
"""
:param data: A dictionary contains `inputs` and optional `image` field.
:return: A dictionary with `image` field contains image in base64.
"""
inputs = data.pop("inputs", data)
encoded_image = data.pop("image", None)
encoded_mask_image = data.pop("mask_image", None)
        # hyperparameters
num_inference_steps = data.pop("num_inference_steps", 25)
guidance_scale = data.pop("guidance_scale", 7.5)
negative_prompt = data.pop("negative_prompt", None)
height = data.pop("height", None)
width = data.pop("width", None)
# process image
if encoded_image is not None and encoded_mask_image is not None:
image = self.decode_base64_image(encoded_image)
mask_image = self.decode_base64_image(encoded_mask_image)
else:
image = None
mask_image = None
# run inference pipeline
out = self.pipe(inputs,
image=image,
mask_image=mask_image,
num_inference_steps=num_inference_steps,
guidance_scale=guidance_scale,
num_images_per_prompt=1,
negative_prompt=negative_prompt,
height=height,
width=width
)
# return first generate PIL image
return out.images[0]
# helper to decode input image
def decode_base64_image(self, image_string):
base64_image = base64.b64decode(image_string)
buffer = BytesIO(base64_image)
image = Image.open(buffer)
return image
```
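Before deploying, you can optionally run a quick local smoke test of the handler. The snippet below is a sketch under the assumption that you have a GPU available and have cloned the repository, including the model weights, into the current directory; the image file names are just examples.
```python
import base64
from handler import EndpointHandler

# helper to read and base64-encode an image file
def encode_image(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# initialize the handler, which loads the inpainting pipeline from the local repository
my_handler = EndpointHandler(path=".")

payload = {
    "inputs": "Face of a bengal cat, high resolution, sitting on a park bench",
    "image": encode_image("dog.png"),           # base image
    "mask_image": encode_image("mask_dog.png"), # mask marking the area to repaint
}

# run inference locally; the handler returns a PIL image
result = my_handler(payload)
result.save("local_test.png")
```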
## 2. Deploy Stable Diffusion 2 Inpainting as Inference Endpoint
UI: **[https://ui.endpoints.huggingface.co/](https://ui.endpoints.huggingface.co/)**
The first step is to deploy our model as an Inference Endpoint. We can deploy our custom Custom Handler the same way as a regular Inference Endpoint.
Select the repository, the cloud, and the region, adjust the instance and security settings, and deploy.
![repository](/static/blog/stable-diffusion-inpainting-inference-endpoints/repository.png)
The Inference Endpoint Service will check during the creation of your Endpoint if there is a **`handler.py`** available and valid and will use it for serving requests no matter which “Task” you select.
The UI will automatically select the preferred instance type for us after it recognizes the model. We can create our endpoint by clicking on “Create Endpoint”. This will now create our image artifact and then deploy our model; this should take a few minutes.
## 3. Integrate Stable Diffusion Inpainting as API and send HTTP requests using Python
Hugging Face Inference Endpoints can directly work with binary data, which means that we can directly send our image to the endpoint. We are going to use **`requests`** to send our requests (make sure you have it installed: **`pip install requests`**). We need to replace the `ENDPOINT_URL` and `HF_TOKEN` with our values and can then send a request. Since we are using it as an API, we need to provide our base image, an image with the mask, and our text prompt.
```python
import json
from typing import List
import requests as r
import base64
from PIL import Image
from io import BytesIO
ENDPOINT_URL = ""
HF_TOKEN = ""
# helper image utils
def encode_image(image_path):
with open(image_path, "rb") as i:
b64 = base64.b64encode(i.read())
return b64.decode("utf-8")
def predict(prompt, image, mask_image):
image = encode_image(image)
mask_image = encode_image(mask_image)
# prepare sample payload
payload = {"inputs": prompt, "image": image, "mask_image": mask_image}
# headers
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
"Accept": "image/png"# important to get an image back
}
response = r.post(ENDPOINT_URL, headers=headers, json=payload)
img = Image.open(BytesIO(response.content))
return img
prediction = predict(
prompt="Face of a bengal cat, high resolution, sitting on a park bench",
image="dog.png",
mask_image="mask_dog.png"
)
```
If you want to test an example, you can find the `dog.png` and `mask_dog.png` in the repository at: [https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/tree/main](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/tree/main)
The result of the request should be a `PIL` image we can display:
![result](/static/blog/stable-diffusion-inpainting-inference-endpoints/result.png)
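Since `predict` returns a `PIL.Image` object, saving the generated image locally is a one-liner:
```python
# persist the generated image to disk (file name is just an example)
prediction.save("inpainting_result.png")
```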
## Conclusion
We successfully created and deployed a Stable Diffusion Inpainting inference handler to Hugging Face Inference Endpoints in less than 30 minutes.
Having scalable, secure API Endpoints will allow you to move from the experimenting (space) to integrated production workloads, e.g., Javascript Frontend/Desktop App and API Backend.
Now, it's your turn! **[Sign up](https://ui.endpoints.huggingface.co/new)** and create your custom handler within a few minutes!
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Combine Amazon SageMaker and DeepSpeed to fine-tune FLAN-T5 XXL | https://www.philschmid.de/sagemaker-deepspeed | 2023-02-22 | [
"T5",
"DeepSpeed",
"HuggingFace",
"SageMaker"
] | Learn how to fine-tune Google's FLAN-T5 XXL on Amazon SageMaker using DeepSpeed and Hugging Face Transformers. | FLAN-T5, released with the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper, is an enhanced version of T5 that has been fine-tuned on a mixture of tasks, or in simple words, a better T5 model in every aspect. FLAN-T5 outperforms T5 by double-digit improvements for the same number of parameters. Google has open sourced [5 checkpoints available on Hugging Face](https://huggingface.co/models?other=arxiv:2210.11416) ranging from 80M parameters up to 11B parameters.
In a previous blog post, we already learned how to [“Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers”](https://www.philschmid.de/fine-tune-flan-t5-deepspeed). In this blog post, we look into how we can integrate DeepSpeed into Amazon SageMaker to allow any practitioners to train those billion parameter size models with a simple API call. Amazon SageMaker managed training allows you to train large language models without having to manage the underlying infrastructure. You can find more information about Amazon SageMaker in the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html).
This means we will learn how to fine-tune FLAN-T5 XL & XXL using model parallelism, multiple GPUs, and [DeepSpeed ZeRO](https://www.deepspeed.ai/tutorials/zero/) on Amazon SageMaker.
The blog post is structured as follows:
1. [Process dataset and upload to S3](#1-process-dataset-and-upload-to-s3)
2. [Prepare training script and deepspeed launcher](#2-prepare-training-script-and-deepspeed-launcher)
3. [Fine-tune FLAN-T5 XXL on Amazon SageMaker](#3-fine-tune-flan-t5-xxl-on-amazon-sagemaker)
before we start, let’s install the required libraries and make sure we have the correct permissions to access S3.
```python
!pip install "transformers==4.26.0" "datasets[s3]==2.9.0" sagemaker --upgrade
```
If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 1. Process dataset and upload to S3
Similar to the [“Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers”](https://www.philschmid.de/fine-tune-flan-t5-deepspeed) post, we need to prepare a dataset to fine-tune our model. As mentioned in the beginning, we will fine-tune [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) on the [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail). This blog post does not go into detail about the dataset generation. If you want to learn the detailed steps, check out the [previous post](https://www.philschmid.de/fine-tune-flan-t5).
We define some parameters, which we use throughout the whole example; feel free to adjust them to your needs.
```python
# experiment config
model_id = "google/flan-t5-xxl" # Hugging Face Model Id
dataset_id = "cnn_dailymail" # Hugging Face Dataset Id
dataset_config = "3.0.0" # config/verison of the dataset
save_dataset_path = "data" # local path to save processed dataset
text_column = "article" # column of input text is
summary_column = "highlights" # column of the output text
# custom instruct prompt start
prompt_template = f"Summarize the following news article:\n{{input}}\nSummary:\n"
```
Compared to the [previous example](https://www.philschmid.de/fine-tune-flan-t5), we are splitting the processing and training into two separate paths. This allows you to run the preprocessing outside of the managed SageMaker Training job. We process (tokenize) the dataset, upload it to S3 and pass it into our managed training job.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np
# Load dataset from the hub
dataset = load_dataset(dataset_id,name=dataset_config)
# Load tokenizer of FLAN-t5-base
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 287113
# Test dataset size: 11490
```
We defined a `prompt_template` in our config, which we will use to construct an instruct prompt for better performance of our model. Our `prompt_template` has a “fixed” start and end, and our document is in the middle. This means we need to ensure that the “fixed” template parts + document do not exceed the max length of the model. Therefore we calculate the max length of our document, which we will later use for padding and truncation.
```python
prompt_lenght = len(tokenizer(prompt_template.format(input=""))["input_ids"])
max_sample_length = tokenizer.model_max_length - prompt_lenght
print(f"Prompt length: {prompt_lenght}")
print(f"Max input length: {max_sample_length}")
# Prompt length: 12
# Max input length: 500
```
We now know that our documents can be up to 500 tokens long while still fitting our `prompt_template` correctly. In addition to our input, we need to better understand our “target” sequence length, meaning how long the summaries in our dataset are. Therefore we iterate over the dataset and calculate the max input length (at most 500) and the max target length (this takes a few minutes).
```python
from datasets import concatenate_datasets
import numpy as np
# The maximum total input sequence length after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded.
tokenized_inputs = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x[text_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])
max_source_length = max([len(x) for x in tokenized_inputs["input_ids"]])
max_source_length = min(max_source_length, max_sample_length)
print(f"Max source length: {max_source_length}")
# The maximum total sequence length for target text after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded."
tokenized_targets = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x[summary_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])
target_lenghts = [len(x) for x in tokenized_targets["input_ids"]]
# use 95th percentile as max target length
max_target_length = int(np.percentile(target_lenghts, 95))
print(f"Max target length: {max_target_length}")
```
We now have everything needed to process our dataset.
```python
def preprocess_function(sample, padding="max_length"):
# created prompted input
inputs = [prompt_template.format(input=item) for item in sample[text_column]]
# tokenize inputs
model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
# Tokenize targets with the `text_target` keyword argument
labels = tokenizer(text_target=sample[summary_column], max_length=max_target_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length":
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
# process dataset
tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=list(dataset["train"].features))
```
After we processed the datasets we are going to use the new [FileSystem integration](https://huggingface.co/docs/datasets/filesystems) to upload our dataset to S3. We are using the `sess.default_bucket()`, adjust this if you want to store the dataset in a different S3 bucket. We will use the S3 path later in our training script.
```python
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/processed/{dataset_id}/train'
tokenized_dataset["train"].save_to_disk(training_input_path)
# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/processed/{dataset_id}/test'
tokenized_dataset["test"].save_to_disk(test_input_path)
print("uploaded data to:")
print(f"training dataset to: {training_input_path}")
print(f"test dataset to: {test_input_path}")
```
## 2. Prepare training script and deepspeed launcher
Done! The last step before we start our training is to prepare our training script and `deepspeed`. We learned in the introduction that we would leverage the DeepSpeed integration with the Hugging Face Trainer. In the [previous post](https://www.philschmid.de/fine-tune-flan-t5-deepspeed) we used the `deepspeed` launcher to start our training on multiple GPUs. As of today Amazon SageMaker does not support the `deepspeed` launcher. 😒
To overcome this limitation, we need to create a custom launcher [ds_launcher.py](https://github.com/philschmid/deepspeed-sagemaker-example/blob/main/ds_launcher.py). The launcher is a simple python script to which we pass our training script and which starts the real training script with the correct environment variables and parameters (a minimal sketch is shown after the configuration below). In addition, we need to create a `deepspeed_config.json` to configure our training setup. In the [“Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers”](https://www.philschmid.de/fine-tune-flan-t5-deepspeed) post we created 4 deepspeed configurations for the experiments we ran, including `CPU offloading` and `mixed precision`:
- [ds_flan_t5_z3_config.json](https://github.com/philschmid/deepspeed-sagemaker-example/blob/main/configs/ds_flan_t5_z3_config.json)
- [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deepspeed-sagemaker-example/blob/main/configs/ds_flan_t5_z3_config_bf16.json)
- [ds_flan_t5_z3_offload.json](https://github.com/philschmid/deepspeed-sagemaker-example/blob/main/configs/ds_flan_t5_z3_offload.json)
- [ds_flan_t5_z3_offload_bf16.json](https://github.com/philschmid/deepspeed-sagemaker-example/blob/main/configs/ds_flan_t5_z3_offload_bf16.json)
Depending on your setup, you can use those, e.g. if you are running on NVIDIA V100s, you have to use the config without `bf16` since V100s do not support the `bfloat16` type.
> When fine-tuning `T5` models we cannot use `fp16` since it leads to overflow issues, see: [#4586](https://github.com/huggingface/transformers/issues/4586), [#10830](https://github.com/huggingface/transformers/issues/10830), [#10956](https://github.com/huggingface/transformers/pull/10956)
We are going to use a p4d.24xlarge AWS EC2 instance with 8x NVIDIA A100 40GB GPUs. This means we can leverage `bf16`, which reduces the memory footprint of the model by almost ~2x and allows us to train efficiently without offloading.
We are going to use the [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deepspeed-sagemaker-example/blob/main/configs/ds_flan_t5_z3_config_bf16.json). If you are irritated by the `auto` values, check the [documentation](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration).
```python
deepspeed_parameters = {
"deepspeed": "./configs/ds_flan_t5_z3_config_bf16.json", # deepspeed config file
"training_script": "./scripts/run_seq2seq_deepspeed.py" # real training script, not entrypoint
}
```
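To give a rough idea of what the launcher does, here is a minimal sketch of such a script (the actual [ds_launcher.py](https://github.com/philschmid/deepspeed-sagemaker-example/blob/main/ds_launcher.py) in the repository is more complete; the argument handling here is illustrative only):
```python
# ds_launcher.py (simplified sketch): SageMaker starts this script as the entry point,
# and it in turn starts the real training script through the deepspeed launcher.
import argparse
import subprocess

import torch


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--deepspeed", type=str, required=True, help="path to the deepspeed config json")
    parser.add_argument("--training_script", type=str, required=True, help="real training script to launch")
    # every other --key value pair (our training hyperparameters) is forwarded untouched
    args, training_args = parser.parse_known_args()

    command = [
        "deepspeed",
        f"--num_gpus={torch.cuda.device_count()}",
        args.training_script,
        "--deepspeed",
        args.deepspeed,
        *training_args,
    ]
    subprocess.run(command, check=True)


if __name__ == "__main__":
    main()
```
SageMaker passes the `deepspeed_parameters` and training hyperparameters as `--key value` CLI arguments to this entry point, which is why the launcher only needs to split off its own two arguments and forward the rest.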
## 3. Fine-tune FLAN-T5 XXL on Amazon SageMaker
In addition to our `deepspeed_parameters` we need to define the `training_hyperparameters` for our training script. The `training_hyperparameters` are passed to our `training_script` as CLI arguments with `--key value`.
If you want to better understand which batch_size and `deepspeed_config` work for which hardware setup, you can check out the [Results & Experiments](https://www.philschmid.de/fine-tune-flan-t5-deepspeed#3-results--experiments) we ran.
```python
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
training_hyperparameters={
'model_id': model_id, # pre-trained model
'train_dataset_path': '/opt/ml/input/data/training', # path where sagemaker will save training dataset
'test_dataset_path': '/opt/ml/input/data/test', # path where sagemaker will save test dataset
'epochs': 3, # number of training epochs
'per_device_train_batch_size': 8, # batch size for training
'per_device_eval_batch_size': 8, # batch size for evaluation
'learning_rate': 1e-4, # learning rate used during training
'generation_max_length': max_target_length, # max length of generated summary
}
```
In order to create a SageMaker training job we need a `HuggingFace` Estimator. The Estimator then creates our Amazon SageMaker training job. Amazon SageMaker takes care of starting and managing our EC2 instances, provides the correct Hugging Face container, uploads the provided scripts and downloads the data from our S3 bucket into the container at `/opt/ml/input/data`.
```python
import time
# define Training Job Name
job_name = f'huggingface-deepspeed-{time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())}'
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'ds_launcher.py', # deepspeed launcher script
source_dir = '.', # directory which includes all the files needed for training
instance_type = 'ml.p4d.24xlarge', # instances type used for the training job
instance_count = 1, # the number of instances used for training
base_job_name = job_name, # the name of the training job
role = role, # Iam role used in training job to access AWS ressources, e.g. S3
volume_size = 300, # the size of the EBS volume in GB
transformers_version = '4.17', # the transformers version used in the training job
pytorch_version = '1.10', # the pytorch_version version used in the training job
py_version = 'py38', # the python version used in the training job
hyperparameters = {
**training_hyperparameters,
**deepspeed_parameters
}, # the hyperparameter used for running the training job
)
```
We created our `HuggingFace` estimator including the `ds_launcher.py` as `entry_point` and defined our `deepspeed` config and `training_script` in the `deepspeed_parameters`, which we merged with our `training_hyperparameters`. We can now start our training job, with the `.fit()` method passing our S3 path to the training script.
```python
# define a data input dictonary with our uploaded s3 uris
data = {
'training': training_input_path,
'test': test_input_path
}
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit(data, wait=True)
```
If you want to deploy your model to a SageMaker Endpoint, you can check out the [Deploy FLAN-T5 XXL on Amazon SageMaker](https://www.philschmid.de/deploy-flan-t5-sagemaker) blog.
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Financial Text Summarization with Hugging Face Transformers, Keras & Amazon SageMaker | https://www.philschmid.de/financial-summarizatio-huggingface-keras | 2022-01-19 | [
"HuggingFace",
"Keras",
"SageMaker",
"Tensorflow"
] | Learn how to fine-tune a Hugging Face Transformer for Financial Text Summarization using vanilla `Keras`, `Tensorflow`, `Transformers`, `Datasets` & Amazon SageMaker. | Welcome to this end-to-end Financial Summarization (NLP) example using Keras and Hugging Face Transformers. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with `Tensorflow` & `Keras` to fine-tune a pre-trained seq2seq transformer for financial summarization.
We are going to use the [Trade the Event](https://paperswithcode.com/paper/trade-the-event-corporate-events-detection) dataset for abstractive text summarization. The benchmark dataset contains 303,893 news articles ranging from 2020/03/01 to 2021/05/06. The articles are downloaded from [PRNewswire](https://www.prnewswire.com/) and [Businesswire](https://www.businesswire.com/).
More information about the dataset can be found in the [repository](https://github.com/Zhihan1996/TradeTheEvent/tree/main/data).
We are going to use all of the great features from the Hugging Face ecosystem, like model versioning and experiment tracking, as well as all the great features of Keras, like Early Stopping and Tensorboard.
You can find the notebook and scripts in this repository: [philschmid/keras-financial-summarization-huggingface](https://github.com/philschmid/keras-financial-summarization-huggingface).
## Installation
```python
!pip install git+https://github.com/huggingface/transformers.git@master --upgrade
```
```python
#!pip install "tensorflow==2.6.0"
!pip install transformers "datasets>=1.17.0" tensorboard rouge_score nltk --upgrade
# install gdown for downloading the dataset
!pip install gdown
```
install `git-lfs` to push models to hf.co/models
```python
!sudo apt-get install git-lfs
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join).
If you already have an account you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## Setup & Configuration
In this step, we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. `tokenizer` and `model` we will use.
In this example we are going to fine-tune [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), a distilled version of the [BART](https://arxiv.org/abs/1910.13461) transformer. Since the original repository didn't include Keras weights, I converted the model to Keras using `from_pt=True` when loading the model.
```python
model_id = "philschmid/tf-distilbart-cnn-12-6"
```
You can easily adjust the `model_id` to another summarization model, e.g. `google/pegasus-xsum`.
## Dataset & Pre-processing
We are going to use the [Trade the Event](https://paperswithcode.com/paper/trade-the-event-corporate-events-detection) dataset for abstractive text summarization. The benchmark dataset contains 303,893 news articles ranging from 2020/03/01 to 2021/05/06. The articles are downloaded from [PRNewswire](https://www.prnewswire.com/) and [Businesswire](https://www.businesswire.com/).
We will use the column `text` as `INPUT` and `title` as the summarization `TARGET`.
**sample**
```json
{
"text": "PLANO, Texas, Dec. 8, 2020 /PRNewswire/ --European Wax Center(EWC), the leading personal care franchise brand that offers expert wax services from certified specialists is proud to welcome a new Chief Financial Officer, Jennifer Vanderveldt. In the midst of European Wax Center\"s accelerated growth plan, Jennifer will lead the Accounting and FP&A teams to continue to widen growth and organizational initiatives. (PRNewsfoto/European Wax Center) ...",
"title": "European Wax Center Welcomes Jennifer Vanderveldt As Chief Financial Officer",
"pub_time": "2020-12-08 09:00:00-05:00",
"labels": {
"ticker": "MIK",
"start_time": "2020-12-08 09:00:00-05:00",
"start_price_open": 12.07,
"start_price_close": 12.07,
"end_price_1day": 12.8,
"end_price_2day": 12.4899,
"end_price_3day": 13.0,
"end_time_1day": "2020-12-08 19:11:00-05:00",
"end_time_2day": "2020-12-09 18:45:00-05:00",
"end_time_3day": "2020-12-10 19:35:00-05:00",
"highest_price_1day": 13.2,
"highest_price_2day": 13.2,
"highest_price_3day": 13.2,
"highest_time_1day": "2020-12-08 10:12:00-05:00",
"highest_time_2day": "2020-12-08 10:12:00-05:00",
"highest_time_3day": "2020-12-08 10:12:00-05:00",
"lowest_price_1day": 11.98,
"lowest_price_2day": 11.98,
"lowest_price_3day": 11.98,
"lowest_time_1day": "2020-12-08 09:13:00-05:00",
"lowest_time_2day": "2020-12-08 09:13:00-05:00",
"lowest_time_3day": "2020-12-08 09:13:00-05:00"
}
}
```
The `TradeTheEvent` dataset is not yet available in the `datasets` library. To be able to create a `Dataset` instance we need to write a small helper function, which converts the downloaded `.json` to a `jsonl` file that can then be loaded with `load_dataset`.
As a first step, we need to download the dataset to our filesystem using `gdown`.
```python
!gdown "https://drive.google.com/u/0/uc?export=download&confirm=2rTA&id=130flJ0u_5Ox5D-pQFa5lGiBLqILDBmXX"
```
We should now have a file called `evaluate_news.json` in our filesystem and can write a small helper function to convert the `.json` to a `jsonl` file.
```python
src_path="evaluate_news.json"
target_path='tde_dataset.jsonl'
import json
with open(src_path,"r+") as f, open(target_path, 'w') as outfile:
JSON_file = json.load(f)
for entry in JSON_file:
json.dump(entry, outfile)
outfile.write('\n')
```
We can now remove the `evaluate_news.json` to save some space and avoid confusion.
```python
!rm -rf evaluate_news.json
```
To load our dataset we can use the `load_dataset` function from the `datasets` library.
```python
from datasets import load_dataset
target_path='tde_dataset.jsonl'
ds = load_dataset('json', data_files=target_path)
```
### Pre-processing & Tokenization
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
Before we tokenize our dataset, we remove all of the columns that are unused for the summarization task to save some time and storage.
```python
to_remove_columns = ["pub_time","labels"]
ds = ds["train"].remove_columns(to_remove_columns)
```
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Compared to `text-classification`, in `summarization` our labels are also text. This means we need to apply truncation to both the `text` and the `title` to ensure we don’t pass excessively long inputs to our model. The tokenizers in 🤗 Transformers provide a nifty `as_target_tokenizer()` function that allows you to tokenize the labels in parallel to the inputs.
In addition to this, we define values for `max_input_length` (maximum length before the text is truncated) and `max_target_length` (maximum length for the summary/prediction).
```python
max_input_length = 512
max_target_length = 64
def preprocess_function(examples):
model_inputs = tokenizer(
examples["text"], max_length=max_input_length, truncation=True
)
# Set up the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(
examples["title"], max_length=max_target_length, truncation=True
)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_datasets = ds.map(preprocess_function, batched=True)
```
Since our dataset doesn't include any split, we need to apply `train_test_split` ourselves to have an evaluation/test dataset for evaluating the results during and after training.
```python
# test size will be 15% of train dataset
test_size=.15
processed_dataset = tokenized_datasets.shuffle().train_test_split(test_size=test_size)
```
## Fine-tuning the model using `Keras`
Now that our `dataset` is processed, we can download the pretrained model and fine-tune it. But before we can do this we need to convert our Hugging Face `datasets` Dataset into a `tf.data.Dataset`. For this, we will use the `.to_tf_dataset` method and a `data collator` (Data collators are objects that will form a batch by using a list of dataset elements as input).
### Hyperparameter
```python
from huggingface_hub import HfFolder
import tensorflow as tf
num_train_epochs = 5
train_batch_size = 8
eval_batch_size = 8
learning_rate = 5.6e-5
weight_decay_rate=0.01
num_warmup_steps=0
output_dir=model_id.split("/")[1]
hub_token = HfFolder.get_token() # or your token directly "hf_xxx"
hub_model_id = f'{model_id.split("/")[1]}-tradetheevent'
fp16=True
# Train in mixed-precision float16
# Comment this line out if you're using a GPU that will not benefit from this
if fp16:
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```
### Converting the dataset to a `tf.data.Dataset`
To create our `tf.data.Dataset` we first need to download the model to be able to initialize our data collator.
```python
from transformers import TFAutoModelForSeq2SeqLM
# load pre-trained model
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)
```
To convert our dataset we use the `.to_tf_dataset` method.
```python
from transformers import DataCollatorForSeq2Seq
# Data collator that will dynamically pad the inputs received, as well as the labels.
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors="tf")
# converting our train dataset to tf.data.Dataset
tf_train_dataset = processed_dataset["train"].to_tf_dataset(
columns=["input_ids", "attention_mask", "labels"],
shuffle=True,
batch_size=train_batch_size,
collate_fn=data_collator)
# converting our test dataset to tf.data.Dataset
tf_eval_dataset = processed_dataset["test"].to_tf_dataset(
columns=["input_ids", "attention_mask", "labels"],
shuffle=True,
batch_size=eval_batch_size,
collate_fn=data_collator)
```
### Create optimizer and compile the model
```python
from transformers import create_optimizer
import tensorflow as tf
# create optimizer with weight decay
num_train_steps = len(tf_train_dataset) * num_train_epochs
optimizer, lr_schedule = create_optimizer(
init_lr=learning_rate,
num_train_steps=num_train_steps,
weight_decay_rate=weight_decay_rate,
num_warmup_steps=num_warmup_steps,
)
# compile model
model.compile(optimizer=optimizer)
```
### Callbacks
As mentioned in the beginning, we want to use the [Hugging Face Hub](https://huggingface.co/models) for model versioning and monitoring. Therefore we want to push our model weights, during and after training, to the Hub to version it.
Additionally, we want to track the performance during training, therefore we will push the `Tensorboard` logs along with the weights to the Hub and use the "Training Metrics" feature to monitor our training in real time.
```python
import os
from transformers.keras_callbacks import PushToHubCallback
from tensorflow.keras.callbacks import TensorBoard as TensorboardCallback
callbacks=[]
callbacks.append(TensorboardCallback(log_dir=os.path.join(output_dir,"logs")))
if hub_token:
callbacks.append(PushToHubCallback(output_dir=output_dir,
tokenizer=tokenizer,
hub_model_id=hub_model_id,
hub_token=hub_token))
```
![tensorboard](/static/blog/financial-summarizatio-huggingface-keras/tensorboard.png)
You can find the Tensorboard on the Hugging Face Hub in your model repository under [Training Metrics](https://huggingface.co/philschmid/tf-distilbart-cnn-12-6-tradetheevent/tensorboard). We can clearly see that the experiment I ran is not perfect, since the validation loss increases again after some time. But this is a good example of how to use the `Tensorboard` callback and the Hugging Face Hub. As a next step, I would probably switch to Amazon SageMaker and run multiple experiments with the Tensorboard integration and early stopping to find the best hyperparameters.
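A minimal sketch of adding early stopping to this setup would be to append a standard Keras `EarlyStopping` callback to the existing `callbacks` list before calling `model.fit`; the `patience` value below is an arbitrary assumption, not a tuned choice:
```python
# minimal sketch; patience=2 is an arbitrary assumption, not a tuned value
from tensorflow.keras.callbacks import EarlyStopping

callbacks.append(
    EarlyStopping(
        monitor="val_loss",         # watch the validation loss reported by model.fit
        patience=2,                 # stop after 2 epochs without improvement
        restore_best_weights=True,  # roll back to the best weights seen so far
    )
)
```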
## Training
Start training by calling `model.fit`.
```python
train_results = model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
callbacks=callbacks,
epochs=num_train_epochs,
)
```
## Evaluation
The most commonly used metric to evaluate summarization tasks is the [ROUGE score](<https://en.wikipedia.org/wiki/ROUGE_(metric)>) (short for Recall-Oriented Understudy for Gisting Evaluation). This metric does not behave like standard accuracy: it compares a generated summary against a set of reference summaries.
```python
from datasets import load_metric
from tqdm import tqdm
import numpy as np
import nltk
nltk.download("punkt")
from nltk.tokenize import sent_tokenize
metric = load_metric("rouge")
def evaluate(model, dataset):
all_predictions = []
all_labels = []
for batch in tqdm(dataset):
predictions = model.generate(batch["input_ids"])
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = batch["labels"].numpy()
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds = ["\n".join(sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(sent_tokenize(label.strip())) for label in decoded_labels]
all_predictions.extend(decoded_preds)
all_labels.extend(decoded_labels)
result = metric.compute(
predictions=decoded_preds, references=decoded_labels, use_stemmer=True
)
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
return {k: round(v, 4) for k, v in result.items()}
results = evaluate(model, tf_eval_dataset)
```
## Run Managed Training using Amazon Sagemaker
If you want to run this example on Amazon SageMaker to benefit from the Training Platform, follow the cells below. I converted the notebook into a python script [train.py](./scripts/train.py), which accepts the same hyperparameters and can be run on SageMaker using the `HuggingFace` estimator.
Install SageMaker and gdown.
```python
#!pip install sagemaker gdown
```
Download the dataset and convert it to jsonlines.
```python
!gdown "https://drive.google.com/u/0/uc?export=download&confirm=2rTA&id=130flJ0u_5Ox5D-pQFa5lGiBLqILDBmXX"
src_path="evaluate_news.json"
target_path='tde_dataset.jsonl'
import json
with open(src_path,"r+") as f, open(target_path, 'w') as outfile:
JSON_file = json.load(f)
for entry in JSON_file:
json.dump(entry, outfile)
outfile.write('\n')
```
As a next step, we create a SageMaker session to start our training. The snippet below works in Amazon SageMaker Notebook Instances or Studio. If you are running in a local environment, check out the [documentation](https://huggingface.co/docs/sagemaker/train#installation-and-setup) for how to initialize your session.
```python
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
Now, we can define our `HuggingFace` estimator and Hyperparameter.
```python
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'model_id': 'philschmid/tf-distilbart-cnn-12-6',
'dataset_file_name': 'tde_dataset.jsonl',
'num_train_epochs': 5,
'train_batch_size': 8,
'eval_batch_size': 8,
'learning_rate': 5e-5,
'weight_decay_rate': 0.01,
'num_warmup_steps': 0,
'hub_token': HfFolder.get_token(),
'hub_model_id': 'sagemaker-tf-distilbart-cnn-12-6',
'fp16': True
}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
transformers_version='4.12.3',
tensorflow_version='2.5.1',
py_version='py36',
hyperparameters = hyperparameters
)
```
Upload our raw dataset to S3.
```python
from sagemaker.s3 import S3Uploader
dataset_uri = S3Uploader.upload(local_path="tde_dataset.jsonl",desired_s3_uri=f"s3://{sess.default_bucket()}/TradeTheEvent/tde_dataset.jsonl")
```
After the dataset is uploaded, we can start the training and pass our `s3_uri` as an argument.
```python
# starting the train job
huggingface_estimator.fit({"dataset": dataset_uri})
```
## Conclusion
We managed to successfully fine-tune a Seq2Seq BART Transformer using Transformers and Keras, without any heavy lifting or complex and unnecessary boilerplate code. The new utilities like `.to_tf_dataset` are improving the developer experience of the Hugging Face ecosystem to become more Keras and TensorFlow friendly. Combining those new features with the Hugging Face Hub we get a fully-managed MLOps pipeline for model versioning and experiment management using the Keras callback API. Through SageMaker we could easily scale our training. This was especially helpful since the training takes 10-12h depending on how many epochs we run.
---
You can find the code [here](https://github.com/philschmid/keras-financial-summarization-huggingface) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Managed Transcription with OpenAI Whisper and Hugging Face Inference Endpoints | https://www.philschmid.de/whisper-inference-endpoints | 2022-12-20 | [
"Whisper",
"Transcription",
"HuggingFace",
"Inference"
] | Learn how to deploy OpenAI Whisper for speech recognition and transcription using Hugging Face Inference Endpoints. | In September, [OpenAI](https://openai.com/blog/whisper/) announced and released [Whisper](https://openai.com/blog/whisper/), an automatic speech recognition (ASR) system trained on 680,000 hours of audio. Whisper achieved state-of-the-art performance and changed the status quo of speech recognition and transcription practically overnight.
Whisper is a Seq2Seq Transformer model trained for speech recognition (transcription) and translation, allowing it to transcribe audio to text in the same language, but also to translate the audio from one language into text in another language. [OpenAI released 10 model checkpoints](https://huggingface.co/models?other=whisper) from a tiny (39M) to a large (1.5B) version of Whisper.
![whisper-architecture](/static/blog/whisper-inference-endpoints/whisper.png)
- Paper: [https://cdn.openai.com/papers/whisper.pdf](https://cdn.openai.com/papers/whisper.pdf)
- Official repo: [https://github.com/openai/whisper/tree/main](https://github.com/openai/whisper/tree/main)
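If you just want to experiment with one of these checkpoints locally before deploying anything, the `transformers` pipeline is enough. The sketch below assumes `ffmpeg` is installed and uses a placeholder file name:
```python
# local experiment sketch; "sample.flac" is a placeholder for any audio file on disk
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")
result = transcriber("sample.flac")
print(result["text"])
```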
Whisper large-v2 achieves state-of-the-art performance on the [Fleurs dataset](https://huggingface.co/datasets/google/fleurs) from Google on speech recognition (ASR), including a WER of `4.2` for English. You can find the detailed results in [the paper](https://arxiv.org/abs/2212.04356).
**Now, we know how awesome Whisper for speech recognition is, but how can I deploy it and use it to add transcription capabilities to my applications and workflows?**
Moving machine learning models, especially transformers, from a notebook environment to a production environment is challenging. You need to consider security, scaling, monitoring, CI/CD and more, which is a lot of effort when building it from scratch and by yourself.
That is why we have built [Hugging Face inference endpoints](https://huggingface.co/inference-endpoints), our managed inference service to easily deploy Transformers, Diffusers, or any model on dedicated, fully managed infrastructure. Keep your costs low with our secure, compliant, and flexible production solution.
---
## Deploy Whisper as Inference Endpoint
In this blog post, we will show you how to deploy OpenAI Whisper with [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) for scalable, secure, and efficient speech transcription API.
The tutorial will cover how to:
1. [Create an Inference Endpoint with `openai/whisper-large-v2`](#1-create-an-inference-endpoint-with-openaiwhisper-large-v2)
2. [Integrate the Whisper endpoint into applications using Python and Javascript](#2-integrate-the-whisper-endpoint-into-applications-using-python-and-javascript)
3. [Cost-performance comparison between Inference Endpoints and Amazon Transcribe and Google Cloud Speech-to-text](#3-cost-performance-comparison-between-inference-endpoints-and-amazon-transcribe-and-google-cloud-speech-to-text)
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active credit card. (Add billing [here](https://huggingface.co/settings/billing))
2. You can access the UI at: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
## 1. Create an Inference Endpoint with `openai/whisper-large-v2`
In this tutorial, you will learn how to deploy [OpenAI Whisper](https://huggingface.co/openai/whisper-large-v2) from the [Hugging Face Hub](https://huggingface.co/models?other=stable-diffusion) to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints).
You can access the UI of [Inference Endpoints](https://huggingface.co/inference-endpoints) directly at: [https://ui.endpoints.huggingface.co/](https://ui.endpoints.huggingface.co/) or through the [Landingpage](https://huggingface.co/inference-endpoints).
The first step is to deploy our model as an Inference Endpoint. Therefore we click “new endpoint” and add the Hugging face repository Id of the Whisper model we want to deploy. In our case, it is `openai/whisper-large-v2`.
![repository](/static/blog/whisper-inference-endpoints/repository.png)
_Note: By default, the Inference Endpoint will use “English” as the language for transcription. If you want to use Whisper for non-English speech recognition, you need to create a [custom handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) and adjust the [decoder prompt](https://huggingface.co/spaces/whisper-event/whisper-demo/blob/main/app.py#L19)._
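A rough sketch of what such a `handler.py` could look like is shown below — the assumption that the raw audio bytes arrive as `data["inputs"]` and the choice of "german" as the target language are illustrative only, not the exact implementation of the default container:
```python
# handler.py -- rough sketch of a custom handler for non-English transcription
# assumptions: raw audio bytes arrive as data["inputs"]; "german" is only an example
from transformers import pipeline

class EndpointHandler:
    def __init__(self, path=""):
        # load the model from the repository the endpoint was created from
        self.pipe = pipeline("automatic-speech-recognition", model=path)
        # steer the decoder towards the desired language and task
        self.pipe.model.config.forced_decoder_ids = self.pipe.tokenizer.get_decoder_prompt_ids(
            language="german", task="transcribe"
        )

    def __call__(self, data):
        audio = data["inputs"]  # raw binary audio payload
        prediction = self.pipe(audio)
        return {"text": prediction["text"]}
```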
Now, we can make changes to the provider, region, or instance we want to use, as well as configure the security level of our endpoint. The easiest is to keep the suggested defaults from the application.
![instance-settings](/static/blog/whisper-inference-endpoints/instance-settings.png)
We can deploy our model by clicking on the “Create Endpoint” button. Once we click the “create” button, [Inference Endpoints](https://huggingface.co/inference-endpoints) will create a dedicated container with the model and start our resources. After a few minutes, our endpoint is up and running.
We can test our Whisper model directly in the UI by either recording from our microphone or uploading a small audio file, e.g. [sample](https://cdn-media.huggingface.co/speech_samples/sample2.flac).
![widget](/static/blog/whisper-inference-endpoints/widget.png)
## 2. Integrate the Whisper endpoint into applications using Python and Javascript
[Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) can directly work with binary data, meaning we can send our audio file as binary and get the transcription in return.
You can download the sample audio from here: [https://cdn-media.huggingface.co/speech_samples/sample2.flac](https://cdn-media.huggingface.co/speech_samples/sample2.flac).
### Python Example
For Python, we are going to use **`requests`** to send our requests and read our audio file from disk (make sure you have it installed: **`pip install requests`**). Make sure to replace `ENDPOINT_URL` and `HF_TOKEN` with your values.
```python
import json
from typing import List
import requests as r
import base64
import mimetypes
ENDPOINT_URL=""
HF_TOKEN=""
def predict(path_to_audio:str=None):
# read audio file
with open(path_to_audio, "rb") as i:
b = i.read()
# get mimetype
content_type= mimetypes.guess_type(path_to_audio)[0]
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": content_type
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_audio="sample1.flac")
print(prediction)
```
expected prediction
```python
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
## Javascript
For Javascript, we create an HTML snippet, which you can run in the browser or using frontend frameworks like React, Svelte or Vue. The snippet is a minimal example of how we can add transcription features, in our application, by enabling the upload of audio files, which will then be transcribed. Make sure to replace `ENDPOINT_URL` and `HF_TOKEN` with your values.
```html
<div><label for="audio-upload">Upload an audio file:</label></div>
<div><input id="audio-upload" type="file" /></div>
<div>Transcription:</div>
<code id="transcripiton"></code>
<script>
const ENDPOINT_URL = ''
const HF_TOKEN = ''
function changeHandler({ target }) {
// Make sure we have files to use
if (!target.files.length) return
// create request object to send file to inference endpoint
const options = {
method: 'POST',
headers: {
Authorization: `Bearer ${HF_TOKEN}`,
'Content-Type': target.files[0].type,
},
body: target.files[0],
}
// send post request to inference endpoint
fetch(ENDPOINT_URL, options)
.then((response) => response.json())
.then((response) => {
// add
console.log(response.text)
const theDiv = document.getElementById('transcripiton')
const content = document.createTextNode(response.text)
theDiv.appendChild(content)
})
.catch((err) => console.error(err))
}
document.getElementById('audio-upload').addEventListener('change', changeHandler)
</script>
```
If you open this in the browser or serve it with a web server, you should see a plain HTML page with an input form, which you can use to upload an audio file.
![javascript](/static/blog/whisper-inference-endpoints/javascript.png)
## 3. Cost-performance comparison between Inference Endpoints and Amazon Transcribe and Google Cloud Speech-to-text
We managed to deploy and integrate OpenAI Whisper using Python or Javascript, but how do [Inference Endpoints](https://huggingface.co/inference-endpoints) compare to AI Services of Public clouds like [Amazon Transcribe](https://aws.amazon.com/transcribe/?nc1=h_ls) or [Google Cloud Speech-to-text](https://cloud.google.com/speech-to-text)?
For this, we took a look at the latency and cost of all three different options. For the cost, we compared two use cases, 1. real-time inference and 2. batch inference.
### Latency
To compare the latency, we created 4 different audio files with lengths of 1 minute, 10 minutes, and 25 minutes and transcribed them using each service. In the chart below, you can find the results.
For [Inference Endpoints](https://huggingface.co/inference-endpoints), we went with the GPU-small instance, which runs an NVIDIA T4 and `whisper-large-v2`. You can see the latency is on par with, if not better than, the managed cloud services, and the transcription quality is better. _If you are interested in the raw transcriptions, let me know._
![latency](/static/blog/whisper-inference-endpoints/latency.png)
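To reproduce such a measurement yourself, you can simply time the `predict` helper from the Python example above (the file name below is a placeholder for one of your own test files):
```python
# simple latency measurement sketch; the file name is a placeholder
import time

start = time.time()
prediction = predict(path_to_audio="10_minute_sample.flac")
print(f"Transcription took {time.time() - start:.1f} seconds")
```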
### Real-time inference cost
Real-time inference describes the workload where you want the prediction of your model as fast as possible to be able to work with the results. When looking at real-time inference, we need to be careful since [Amazon Transcribe](https://aws.amazon.com/transcribe/?nc1=h_ls) and [Google Cloud Speech-to-text](https://cloud.google.com/speech-to-text) are managed AI services where you pay only for what you use. With Inference Endpoints, you pay for the uptime of your endpoint based on the replicas.
For this scenario, we looked at how many hours of audio would be needed to achieve break-even with [Inference Endpoints](https://huggingface.co/inference-endpoints) to reduce your cost on real-time workloads.
![real-time-cost](/static/blog/whisper-inference-endpoints/real-time-cost.png)
[Amazon Transcribe](https://aws.amazon.com/transcribe/?nc1=h_ls) and [Google Cloud Speech-to-text](https://cloud.google.com/speech-to-text) cost the same and are represented as the red line in the chart. For [Inference Endpoints](https://huggingface.co/inference-endpoints), we looked at a CPU deployment and a GPU deployment. If you deploy Whisper large on a CPU, you will achieve break even after 121 hours of audio and for a GPU after 304 hours of audio data.
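The break-even point itself is simple arithmetic: the monthly cost of a continuously running endpoint divided by the per-audio-hour price of the managed service. The numbers below are hypothetical placeholders for illustration only — always check the current pricing pages:
```python
# illustrative break-even sketch -- all prices are hypothetical placeholders
managed_price_per_audio_hour = 1.44  # assumed price per hour of transcribed audio
endpoint_price_per_hour = 0.60       # assumed price per hour the endpoint is running
hours_per_month = 24 * 30

monthly_endpoint_cost = endpoint_price_per_hour * hours_per_month
break_even_audio_hours = monthly_endpoint_cost / managed_price_per_audio_hour
print(f"Break-even at ~{break_even_audio_hours:.0f} hours of audio per month")
```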
### Batch inference cost
Batch inference describes the workload where you run inference over a “dataset”/collection of files on a specific schedule or after a certain amount of time. You can use batch inference with [Inference Endpoints](https://huggingface.co/inference-endpoints) by creating and deleting your endpoints when running your batch job.
For this scenario, we looked at how much it costs to transcribe 25 minutes of audio for each service. The results speak for themselves! With [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints), you can save up to 96% when using batch processing. But you have to keep in mind that the start time/cold start for Inference Endpoints might be slower since you'll create the resources each time.
![thumbnail](/static/blog/whisper-inference-endpoints/thumbnail.png)
---
Now, it's your turn to integrate Whisper into your applications with [Inference Endpoints](https://huggingface.co/inference-endpoints).
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Getting Started with AutoML and AWS AutoGluon | https://www.philschmid.de/getting-started-with-automl-and-aws-autogluon | 2020-04-20 | [
"AWS",
"AutoML",
"Computer Vision"
] | Built an Object Detection Model with AWS AutoML library AutoGluon. | Google [CEO Sundar Pichai wrote](https://blog.google/technology/ai/making-ai-work-for-everyone/), “... _designing neural
nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and
engineers._" Shortly after this Google launched its service AutoML in early 2018.
AutoML aims to enable developers with limited machine learning expertise to train high-quality models specific to their
business needs. The goal of AutoML is to automate all the major repetitive tasks such as
[feature selection](https://en.wikipedia.org/wiki/Feature_selection) or
[hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). This allows creating more models in
less time with improved quality and accuracy.
A basic two-step approach to machine learning: First, the model is created by fitting it to the data. Second, the model
is used to predict the output for new (previously unseen) data.
![machine-learning-process](/static/blog/getting-started-with-automl-and-aws-autogluon/ml-process.png)
This blog post demonstrates how to get started quickly with AutoML. It will give you a step-by-step tutorial on how to
build an Object Detection Model using AutoGluon, with top-notch accuracy. I created a
[Google Colab Notebook](https://colab.research.google.com/drive/1Z0F2FOowLWrJ-gYx72fiWLIpmEVMk4Bo) with a full example.
---
## AWS is entering the field of AutoML
At re:Invent 2019 AWS launched a bunch of add-ons for their
[managed machine learning service SageMaker](https://aws.amazon.com/sagemaker/?nc1=h_ls), amongst others
"**[SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot/?nc1=h_ls)**". SageMaker Autopilot is an AutoML
service comparable to Google's AutoML service.
In January 2020 Amazon Web Services Inc. (AWS) quietly launched an open-source library called
**[AutoGluon](https://autogluon.mxnet.io/)**, the library behind SageMaker Autopilot.
AutoGluon enables developers to write machine learning-based applications that use image, text or tabular data sets with
just a few lines of code.
![sagemaker-and-autogluon](/static/blog/getting-started-with-automl-and-aws-autogluon/autogluon-sagemaker.png)
With those tools, AWS has entered the field of managed AutoML services (MLaaS) and competes with
[Google and its AutoML](https://cloud.google.com/automl?hl=de) service.
---
## What is AutoGluon?
"_[AutoGluon](https://autogluon.mxnet.io/index.html) enables easy-to-use and easy-to-extend AutoML with a focus on deep
learning and real-world applications spanning image, text, or tabular data. Intended for both ML beginners and experts,
AutoGluon enables you to... "_
- quickly prototype deep learning solutions
- automatic hyperparameter tuning, model selection / architecture search
- improve existing bespoke models and data pipelines
AutoGluon enables you to build machine learning models with only 3 Lines of Code.
```python
from autogluon import TabularPrediction as task
predictor = task.fit(train_data=task.Dataset(file_path="TRAIN_DATA.csv"), label="PREDICT_COLUMN")
predictions = predictor.predict(task.Dataset(file_path="TEST_DATA.csv"))
```
Currently, AutoGluon can create models for image classification, object detection, text classification, and supervised
learning with tabular datasets.
If you are interested in how AutoGluon is doing all the magic behind the scenes take a look at the
"[Machine learning with AutoGluon, an open source AutoML library](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/)"
Post on the AWS Open Source Blog.
---
## Tutorial
We are going to build an Object Detection Model to detect fruits (apples, oranges and bananas) in images. I built a small
dataset with around 300 images to achieve a quick training process.
[You can find the dataset here.](https://www.kaggle.com/philschmid/tiny-fruit-object-detection/metadata)
I am using Google Colab with a GPU runtime for this tutorial. If you are not sure how to use a GPU Runtime take a look
[here](https://www.philschmid.de/google-colab-the-free-gpu-tpu-jupyter-notebook-service).
Okay, now let's get started with the tutorial.
---
## Installing AutoGluon
AutoGluon offers different installation packages for different hardware preferences. For more installation instructions
take a look at the [AutoGluon Installation Guide here.](https://autogluon.mxnet.io/#installation)
The first step is to install `AutoGluon` with pip and CUDA support.
```python
# Here we assume CUDA 10.0 is installed. You should change the number
# according to your own CUDA version (e.g. mxnet-cu101 for CUDA 10.1).
!pip install --upgrade mxnet-cu100
!pip install autogluon
```
For AutoGluon to work in Google Colab, we also have to install `ipykernel` and restart the runtime.
```python
!pip install -U ipykernel
```
After a successful restart of your runtime you can import `autogluon` and print out the version.
```python
import autogluon as ag
from autogluon import ObjectDetection as task
print(ag.__version__)
# >>>> '0.0.6'
```
---
## Loading data and creating datasets
The next step is to load the dataset we use for the object detection task. In the `ObjectDetection` task from AutoGluon,
you can either use PASCAL VOC format or the COCO format by adjusting the `format` parameter of `Dataset()` to either
`coco` or `voc`. The
[Pascal VOC](https://gluon-cv.mxnet.io/build/examples_datasets/detection_custom.html#pascal-voc-like) Dataset contains
two directories: `Annotations` and `JPEGImages`. The
[COCO](https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch/#coco-dataset-format) dataset is
formatted in `JSON` and is a collection of "info", "licenses", "images", "annotations", "categories".
For training, we are going to use the
[tiny_fruit_object_detection](https://www.kaggle.com/philschmid/tiny-fruit-object-detection/metadata) dataset, which I
build. The Dataset contains around 300 images of bananas, apples, oranges or a combination of them together.
We are using 240 images for training, 30 for testing and 30 for evaluating the model.
![sample-images](/static/blog/getting-started-with-automl-and-aws-autogluon/sample-images.png)
Using the commands below, we can `download` and `unzip` this dataset, which is only 29MB. After this we create our
`Dataset` for train and test with `task.Dataset()`.
```python
import os

# download the data
root = './'
filename_zip = ag.download('https://philschmid-datasets.s3.eu-central-1.amazonaws.com/tiny_fruit.zip',
path=root)
# unzip data
filename = ag.unzip(filename_zip, root=root)
# create Dataset
data_root = os.path.join(root, filename)
# train dataset
dataset_train = task.Dataset(data_root, classes=('banana','apple','orange'),format='voc')
# test dataset
dataset_test = task.Dataset(data_root, index_file_name='test', classes=('banana','apple','orange'),format='voc')
```
---
## Training the Model
The third step is to train our model with the created `dataset`. In AutoGluon you define your classifier as a variable,
here `detector`, and define parameters in the `fit()` function at training time. For example, you can define a
`time_limit` which automatically stops the training after a certain time. You can define a range for your own
`learning_rate` or set the number of `epochs`. One of the most important parameters is `num_trials`. This parameter
defines the maximum number of hyperparameter configurations to try out. You can find the full documentation of the
[configurable parameters here](https://autogluon.mxnet.io/api/autogluon.task.html#autogluon.task.ObjectDetection).
We are going to train our model for `20 epochs` and train 3 different models by setting `num_trials=3`.
```python
from autogluon import ObjectDetection as task
epochs = 20
detector = task.fit(dataset_train,
num_trials=3,
epochs=epochs,
lr=ag.Categorical(5e-4, 1e-4, 3e-4),
ngpus_per_trial=1)
```
As a result, we are getting a chart with the mean average precision (mAP) and the number of epochs. The mAP is a common
metric to calculate the accuracy of an object detection model.
Our best model (blue-line) achieved an mAP of `0.9198171507070327`
![result-chart](/static/blog/getting-started-with-automl-and-aws-autogluon/result-chart.png)
## Evaluating the Model
After finishing training, we are now going to evaluate/test the performance of our model on our `test` dataset.
```python
test_map = detector.evaluate(dataset_test)
print(f"The mAP on the test dataset is {test_map[1][1]}")
```
The mAP on the test dataset is `0.8724113724113725`, which is pretty good considering we only trained with 240 images
and for 20 epochs.
---
## Predict an Image
To use our trained model for predicting you can simply run `detector.predict(image_path)`, which will return a tuple
(`ind`) containing the class-IDs of detected objects, the confidence-scores (`prob`), and the corresponding predicted
bounding box locations (`loc`).
```python
image = 'mixed_10.jpg'
image_path = os.path.join(data_root, 'JPEGImages', image)
ind, prob, loc = detector.predict(image_path)
```
![predict-result](/static/blog/getting-started-with-automl-and-aws-autogluon/predict-result.png)
## Save Model
_As of the time of writing this article, saving an object detection model is not yet implemented in version `0.0.6`, but
will be in the next deployed version._
To save your model, you only have to run `detector.save()`
```python
savefile = 'model.pkl'
detector.save(savefile)
```
---
## Load Model
_As of the time of writing this article, loading an object detection model is not yet implemented in version `0.0.6`, but
will be in the next deployed version._
```python
from autogluon import Detector
new_detector = Detector.load(savefile)
image = 'mixed_17.jpg'
image_path = os.path.join(data_root, 'JPEGImages', image)
new_detector.predict(image_path)
```
---
Thanks for reading. You can find the
[Google Colab Notebook](https://colab.research.google.com/drive/1Z0F2FOowLWrJ-gYx72fiWLIpmEVMk4Bo) containing a full
example [here](https://colab.research.google.com/drive/1Z0F2FOowLWrJ-gYx72fiWLIpmEVMk4Bo#scrollTo=XtuOeo_ZzLMq). |
Asynchronous Inference with Hugging Face Transformers and Amazon SageMaker | https://www.philschmid.de/sagemaker-huggingface-async-inference | 2022-02-15 | [
"HuggingFace",
"AWS",
"BERT",
"SageMaker"
] | Learn how to deploy an Asynchronous Inference model with Hugging Face Transformers and Amazon SageMaker, with autoscaling to zero. | Welcome to this getting started guide. We will use the Hugging Face Inference DLCs and Amazon SageMaker Python SDK to run an [Asynchronous Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) job.
Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously. Compared to [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html), [Asynchronous Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) provides immediate access to the results of the inference job rather than waiting for the job to complete.
## How it works
Asynchronous inference endpoints have many similarities (and some key differences) compared to real-time endpoints. The process to create asynchronous endpoints is similar to real-time endpoints. You need to create: a model, an endpoint configuration, and an endpoint. However, there are configuration parameters specific to asynchronous inference endpoints, which we will explore below.
The invocation of asynchronous endpoints differs from real-time endpoints. Rather than passing the request payload inline with the request, you upload the payload to Amazon S3 and pass an Amazon S3 URI as part of the request. Upon receiving the request, SageMaker provides you with a token with the output location where the result will be placed once processed. Internally, SageMaker maintains a queue with these requests and processes them. During endpoint creation, you can optionally specify an Amazon SNS topic to receive success or error notifications. Once you receive the notification that your inference request has been successfully processed, you can access the result in the output Amazon S3 location.
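Under the hood this boils down to two plain API calls — uploading the payload to Amazon S3 and calling `InvokeEndpointAsync` — which you could also do directly with `boto3`; the bucket, key, and endpoint name below are placeholders:
```python
# low-level sketch of the async invocation flow; bucket, key and endpoint name are placeholders
import boto3

s3 = boto3.client("s3")
sm_runtime = boto3.client("sagemaker-runtime")

# 1. upload the request payload to S3
s3.upload_file("payload.json", "my-bucket", "async-inference/input/payload.json")

# 2. invoke the endpoint with the S3 URI instead of an inline payload
response = sm_runtime.invoke_endpoint_async(
    EndpointName="my-async-endpoint",
    InputLocation="s3://my-bucket/async-inference/input/payload.json",
    ContentType="application/json",
)

# SageMaker immediately returns the S3 location where the result will be written
print(response["OutputLocation"])
```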
![architecture](/static/blog/sagemaker-huggingface-async-inference/e2e.png)
Link to Notebook: [sagemaker/16_async_inference_hf_hub](https://github.com/huggingface/notebooks/blob/master/sagemaker/16_async_inference_hf_hub/sagemaker-notebook.ipynb)
_NOTE: You can run this demo in Sagemaker Studio, your local machine, or Sagemaker Notebook Instances_
## Development Environment and Permissions
### Installation
```python
%pip install sagemaker --upgrade
```
```python
import sagemaker
assert sagemaker.__version__ >= "2.75.0"
```
### Permissions
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## Create Inference `HuggingFaceModel` for the Asynchronous Inference Endpoint
We use the [twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) model running our async inference job. This is a RoBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark.
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.async_inference.async_inference_config import AsyncInferenceConfig
from sagemaker.s3 import s3_path_join
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'cardiffnlp/twitter-roberta-base-sentiment',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py38', # python version used
)
# create async endpoint configuration
async_config = AsyncInferenceConfig(
output_path=s3_path_join("s3://",sagemaker_session_bucket,"async_inference/output") , # Where our results will be stored
# notification_config={
# "SuccessTopic": "arn:aws:sns:us-east-2:123456789012:MyTopic",
# "ErrorTopic": "arn:aws:sns:us-east-2:123456789012:MyTopic",
# }, # Notification configuration
)
# deploy the endpoint endpoint
async_predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge",
async_inference_config=async_config
)
```
We can find our Asynchronous Inference endpoint configuration in the Amazon SageMaker Console. Our endpoint now has type `async` compared to a 'real-time' endpoint.
![deployed-endpoint](/static/blog/sagemaker-huggingface-async-inference/deployed_endpoint.png)
## Request Asynchronous Inference Endpoint using the `AsyncPredictor`
The `.deploy()` returns an `AsyncPredictor` object which can be used to request inference. This `AsyncPredictor` makes it easy to send asynchronous requests to your endpoint and get the results back. It has two methods: `predict()` and `predict_async()`. The `predict()` method is synchronous and will block until the inference is complete. The `predict_async()` method is asynchronous and will return immediately with an `AsyncInferenceResponse`, which can be used to check for the result by polling; if the result object exists at the output path, it gets and returns the result.
### `predict()` request example
The `predict()` will upload our `data` to Amazon S3 and run inference against it. Since we are using `predict` it will block until the inference is complete.
```python
data = {
"inputs": [
"it 's a charming and often affecting journey .",
"it 's slow -- very , very slow",
"the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
"the emotions are raw and will strike a nerve with anyone who 's ever had family trauma ."
]
}
res = async_predictor.predict(data=data)
print(res)
# [{'label': 'LABEL_2', 'score': 0.8808117508888245}, {'label': 'LABEL_0', 'score': 0.6126593947410583}, {'label': 'LABEL_2', 'score': 0.9425230622291565}, {'label': 'LABEL_0', 'score': 0.5511414408683777}]
```
### `predict_async()` request example
The `predict_async()` will upload our `data` to Amazon S3 and run inference against it. Since we are using `predict_async` it will return immediately with an `AsyncInferenceResponse` object.
In this example, we will loop over a `csv` file and send each line to the endpoint. After that we are going to poll the endpoint until the inference is complete.
The provided `tweet_data.csv` contains ~1800 tweets about different airlines.
But first, let's do a quick test to see if we can get a result from the endpoint using `predict_async`
#### Single `predict_async()` request example
```python
from sagemaker.async_inference.waiter_config import WaiterConfig
resp = async_predictor.predict_async(data={"inputs": "i like you. I love you"})
print(f"Response object: {resp}")
print(f"Response output path: {resp.output_path}")
print("Start Polling to get response:")
config = WaiterConfig(
max_attempts=5, # number of attempts
delay=10 # time in seconds to wait between attempts
)
resp.get_result(config)
```
#### High load `predict_async()` request example using a `csv` file
```python
from csv import reader
data_file="tweet_data.csv"
output_list = []
# open file in read mode
with open(data_file, 'r') as csv_reader:
for row in reader(csv_reader):
        # send each row as an async request
resp = async_predictor.predict_async(data={"inputs": row[0]})
output_list.append(resp)
print("All requests sent")
print(f"Output path list length: {len(output_list)}")
print(f"Output path list sample: {output_list[26].output_path}")
# iterate over list of output paths and get results
results = []
for async_response in output_list:
response = async_response.get_result(WaiterConfig())
results.append(response)
print(f"Results length: {len(results)}")
print(f"Results sample: {results[26]}")
```
## Autoscale (to Zero) the Asynchronous Inference Endpoint
Amazon SageMaker supports automatic scaling (autoscaling) for your asynchronous endpoint. Autoscaling dynamically adjusts the number of instances provisioned for a model in response to changes in your workload. Unlike other model hosting options Amazon SageMaker supports, with Asynchronous Inference you can also scale your asynchronous endpoint's instances down to zero.
**Prerequisite**: You need to have an Asynchronous Inference Endpoint up and running. You can check [Create Inference `HuggingFaceModel` for the Asynchronous Inference Endpoint](#create-inference-huggingfacemodel-for-the-asynchronous-inference-endpoint) to see how to create one.
If you want to learn more check-out [Autoscale an asynchronous endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference-autoscale.html) in the SageMaker documentation.
We are going to scale our asynchronous endpoint to 0-5 instances, which means that Amazon SageMaker will scale the endpoint down to 0 instances after `600` seconds or 10 minutes to save you cost, and scale out to a maximum of 5 instances, in `300` second steps, when the backlog grows beyond `5.0` requests per instance.
```python
# application-autoscaling client
asg_client = boto3.client("application-autoscaling")
# This is the format in which application autoscaling references the endpoint
resource_id = f"endpoint/{async_predictor.endpoint_name}/variant/AllTraffic"
# Configure Autoscaling on asynchronous endpoint down to zero instances
response = asg_client.register_scalable_target(
ServiceNamespace="sagemaker",
ResourceId=resource_id,
ScalableDimension="sagemaker:variant:DesiredInstanceCount",
MinCapacity=0,
MaxCapacity=5,
)
response = asg_client.put_scaling_policy(
PolicyName=f'Request-ScalingPolicy-{async_predictor.endpoint_name}',
ServiceNamespace="sagemaker",
ResourceId=resource_id,
ScalableDimension="sagemaker:variant:DesiredInstanceCount",
PolicyType="TargetTrackingScaling",
TargetTrackingScalingPolicyConfiguration={
"TargetValue": 5.0,
"CustomizedMetricSpecification": {
"MetricName": "ApproximateBacklogSizePerInstance",
"Namespace": "AWS/SageMaker",
"Dimensions": [{"Name": "EndpointName", "Value": async_predictor.endpoint_name}],
"Statistic": "Average",
},
"ScaleInCooldown": 600, # duration until scale in begins (down to zero)
"ScaleOutCooldown": 300 # duration between scale out attempts
},
)
```
![scaling](/static/blog/sagemaker-huggingface-async-inference/scaling.png)
The Endpoint will now scale to zero after 600s. Let's wait until the endpoint is scaled to zero and then test sending requests and measure how long it takes to start an instance to process the requests. We are using the `predict_async()` method to send the request.
_**IMPORTANT: Since we defined the `TargetValue` to `5.0` the Async Endpoint will only start to scale out from 0 to 1 if you are sending more than 5 requests within 300 seconds.**_
```python
import time
start = time.time()
output_list=[]
# send 10 requests
for i in range(10):
resp = async_predictor.predict_async(data={"inputs": "it 's a charming and often affecting journey ."})
output_list.append(resp)
# iterate over list of output paths and get results
results = []
for async_response in output_list:
response = async_response.get_result(WaiterConfig(max_attempts=600))
results.append(response)
print(f"Time taken: {time.time() - start}s")
```
It took about 7-9 minutes to start an instance and process the requests. This is perfect when you have non-real-time-critical applications but want to save money.
![scale-out](/static/blog/sagemaker-huggingface-async-inference/scale-out.png)
### Delete the async inference endpoint & Autoscaling policy
```python
response = asg_client.deregister_scalable_target(
ServiceNamespace='sagemaker',
ResourceId=resource_id,
ScalableDimension='sagemaker:variant:DesiredInstanceCount'
)
async_predictor.delete_endpoint()
```
## Conclusion
We successfully deployed an Asynchronous Inference Endpoint to Amazon SageMaker using the SageMaker Python SDK. The SageMaker SDK provides great tooling for deploying and especially for running inference against the Asynchronous Inference Endpoint. It creates a nice `AsyncPredictor` object which can be used to send requests to the endpoint and handles all of the boilerplate behind the scenes for asynchronous inference, giving us simple APIs.
In addition to this, we were able to add autoscaling to the Asynchronous Inference Endpoint with `boto3` for scaling our endpoint in and out. Asynchronous Inference Endpoints can even scale down to zero, which is a great feature for non-real-time-critical applications to save cost.
You should definitely try out Asynchronous Inference Endpoints for your own applications if neither `batch transform` nor `real-time` were the right option for you.
---
You can find the code [here](https://github.com/huggingface/notebooks/blob/master/sagemaker/16_async_inference_hf_hub/sagemaker-notebook.ipynb).
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler | https://www.philschmid.de/huggingface-amazon-sagemaker-training-compiler | 2021-12-07 | [
"AWS",
"BERT",
"HuggingFace",
"Sagemaker"
] | Learn how to compile and fine-tune a multi-class classification Transformer with `Trainer` and the `emotion` dataset using the Amazon SageMaker Training Compiler. | Last week at re:Invent 2021 [Swami Sivasubramanian](https://www.linkedin.com/in/swaminathansivasubramanian) introduced the new [Amazon SageMaker Training Compiler](https://docs.aws.amazon.com/sagemaker/latest/dg/training-compiler.html), which optimizes DL models to accelerate training by more efficiently using SageMaker machine learning (ML) GPU instances. SageMaker Training Compiler is available at no additional charge within SageMaker and can help reduce total billable time as it accelerates training.
You can watch the [AWS re:Invent 2021 - Database, Analytics, and Machine Learning Keynote with Swami Sivasubramanian](https://www.youtube.com/watch?v=ue9aumC7AAk&ab_channel=AWSEvents) video to learn more about SageMaker Training Compiler and all Machine Learning related announcements.
Amazon SageMaker Training Compiler is integrated into the Hugging Face AWS Deep Learning Containers (DLCs).
In this blog post, we will use the Hugging Face `transformers` and `datasets` libraries together with Amazon SageMaker and the new Amazon SageMaker Training Compiler to fine-tune a pre-trained transformer for multi-class text classification. In particular, the pre-trained model will be fine-tuned using the `emotion` dataset. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
![emotion-widget.png](/static/blog/huggingface-amazon-sagemaker-training-compiler/emotion-widget.png)
_**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_
## Development Environment and Permissions
_*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow if you haven't installed it already._
```python
!pip install "sagemaker>=2.70.0" "transformers==4.11.0" --upgrade
# using older dataset due to incompatibility of sagemaker notebook & aws-cli with > s3fs and fsspec to >= 2021.10
!pip install "datasets==1.13" --upgrade
```
```python
import sagemaker
assert sagemaker.__version__ >= "2.70.0"
```
```python
import sagemaker.huggingface
```
## Permissions
_If you are going to use Sagemaker in a local environment. You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## Preprocessing
We are using the `datasets` library to download and preprocess the `emotion` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The [emotion](https://github.com/dair-ai/emotion_dataset) dataset consists of 16000 training examples, 2000 validation examples, and 2000 testing examples.
## Tokenization
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# tokenizer used in preprocessing
model_id = 'bert-base-uncased'
# dataset used
dataset_name = 'emotion'
# s3 key prefix for the data
s3_prefix = 'samples/datasets/emotion'
```
```python
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# tokenizer helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True)
# load dataset
train_dataset, test_dataset = load_dataset(dataset_name, split=['train', 'test'])
# tokenize dataset
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)
# set format for pytorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
```
## Uploading data to `sagemaker_session_bucket`
After we processed the `datasets` we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3.
```python
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
train_dataset.save_to_disk(training_input_path, fs=s3)
# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
test_dataset.save_to_disk(test_input_path, fs=s3)
```
## Creating an Estimator and start a training job
The Amazon SageMaker Training Compiler works best with encoder-type models, like `BERT`, `RoBERTa`, `ALBERT`, and `DistilBERT`.
Model compilation using the Amazon SageMaker Training Compiler increases efficiency and lowers the memory footprint of your Transformers model, which allows larger batch sizes and more efficient and faster training.
We tested long classification tasks with `BERT`, `DistilBERT` and `RoBERTa` and achieved up to 33% higher batch sizes and 1.4x faster training. For best performance, set the batch size to a multiple of 8.
The longer your training job, the larger the benefit of using Amazon SageMaker Training Compiler. 30 minutes seems to be the sweet spot to offset model compilation time in the beginning of your training. Initial pre-training jobs are excellent candidates for using the new Amazon SageMaker Training Compiler.
```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig
# initialize the Amazon Training Compiler
compiler_config=TrainingCompilerConfig()
# hyperparameters, which are passed into the training job
hyperparameters={'epochs': 4, # number of training epochs
'train_batch_size': 24, # batch size for training
'eval_batch_size': 32, # batch size for evaluation
'learning_rate': 3e-5, # learning rate used during training
'model_id':model_id, # pre-trained model
'fp16': True, # Whether to use 16-bit (mixed) precision training
}
# job name for sagemaker training
job_name=f"training-compiler-{hyperparameters['model_id']}-{dataset_name}"
```
Create a `HuggingFace` estimator with the `TrainingCompilerConfig`, the `hyperparameters`, the instance configuration and the training script.
```python
# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point = 'train.py', # fine-tuning script used in training job
source_dir = './scripts', # directory where fine-tuning script is stored
instance_type = 'ml.p3.2xlarge', # instances type used for the training job
instance_count = 1, # the number of instances used for training
base_job_name = job_name, # the name of the training job
    role = role, # IAM role used in training job to access AWS resources, e.g. S3
transformers_version = '4.11.0', # the transformers version used in the training job
pytorch_version = '1.9.0', # the pytorch_version version used in the training job
py_version = 'py38', # the python version used in the training job
hyperparameters = hyperparameters, # the hyperparameter used for running the training job
compiler_config = compiler_config, # the compiler configuration used in the training job
disable_profiler = True, # whether to disable the profiler during training used to gain maximum performance
debugger_hook_config = False, # whether to enable the debugger hook during training used to gain maximum performance
)
```
Start the training with the uploaded datasets on S3 with `huggingface_estimator.fit()`.
```python
# define a data input dictonary with our uploaded s3 uris
data = {
'train': training_input_path,
'test': test_input_path
}
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit(data)
```
## Deploying the endpoint
To deploy our endpoint, we call `deploy()` on our HuggingFace estimator object, passing in our desired number of instances and instance type.
```python
predictor = huggingface_estimator.deploy(1,"ml.g4dn.xlarge")
```
Then, we use the returned predictor object to call the endpoint.
```python
sentiment_input= {"inputs": "Winter is coming and it will be dark soon."}
predictor.predict(sentiment_input)
```
Finally, we delete the inference endpoint.
```python
predictor.delete_endpoint()
```
## Conclusion
With the Amazon SageMaker Training Compiler you can accelerate your training by 1.4x without any code changes required. When you are fine-tuning Transformer models for longer than 30 minutes and are using one of the currently compatible model architectures and tasks, it is a no-brainer to use the new Training Compiler to speed up your training and save costs.
---
You can find the code [here](https://github.com/huggingface/notebooks/blob/master/sagemaker/15_training_compiler/sagemaker-notebook.ipynb) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Few-shot learning in practice with GPT-Neo | https://www.philschmid.de/few-shot-learning-gpt-neo | 2021-06-05 | [
"HuggingFace",
"GPT-Neo"
] | The latest developments in NLP show that you can overcome this limitation by providing a few examples at inference time with a large language model - a technique known as Few-Shot Learning. In this blog post, we'll explain what Few-Shot Learning is, and explore how a large language model called GPT-Neo can be used to generate your own predictions. | > [Cross post from huggingface.co/blog](https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api)
In many Machine Learning applications, the amount of available labeled data is a barrier to producing a high-performing model. The latest developments in NLP show that you can overcome this limitation by providing a few examples at inference time with a large language model - a technique known as Few-Shot Learning. In this blog post, we'll explain what Few-Shot Learning is, and explore how a large language model called GPT-Neo, and the 🤗 Accelerated Inference API, can be used to generate your own predictions.
<script
defer
src="https://gpt-neo-accelerated-inference-api.s3-eu-west-1.amazonaws.com/fewShotInference.js"
></script>
<few-shot-inference-widget ></few-shot-inference-widget>
## What is Few-Shot Learning?
Few-Shot Learning refers to the practice of feeding a machine learning model with a very small amount of training data to guide its predictions, like a few examples at inference time, as opposed to standard fine-tuning techniques which require a relatively large amount of training data for the pre-trained model to adapt to the desired task with accuracy.
This technique has been mostly used in computer vision, but with some of the latest Language Models, like [EleutherAI GPT-Neo](https://www.eleuther.ai/projects/gpt-neo/) and [OpenAI GPT-3](https://openai.com/blog/gpt-3-apps/), we can now use it in Natural Language Processing (NLP).
In NLP, Few-Shot Learning can be used with Large Language Models, which have learned to perform a wide number of tasks implicitly during their pre-training on large text datasets. This enables the model to generalize, that is to understand related but previously unseen tasks, with just a few examples.
Few-Shot NLP examples consist of three main components:
- **Task Description**: A short description of what the model should do, e.g. "Translate English to French"
- **Examples**: A few examples showing the model what it is expected to predict, e.g. "sea otter => loutre de mer"
- **Prompt**: The beginning of a new example, which the model should complete by generating the missing text, e.g. "cheese => "
![few-shot-prompt](/static/blog/few-shot-learning-gpt-neo/few-shot-prompt.png)
<small>
Image from{' '}
<a href="https://arxiv.org/abs/2005.14165" target="_blank">
Language Models are Few-Shot Learners
</a>
</small>
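To make this concrete, here is a minimal sketch of how such a prompt is assembled, using the translation values from the components listed above (the helper names are just for illustration):

```python
# Assemble a few-shot prompt from its three components (illustrative values)
task_description = "Translate English to French"   # task description
examples = ["sea otter => loutre de mer"]           # examples
prompt = "cheese => "                               # prompt to complete

few_shot_prompt = "\n".join([task_description, *examples, prompt])
print(few_shot_prompt)
# Translate English to French
# sea otter => loutre de mer
# cheese =>
```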
Creating these few-shot examples can be tricky, since you need to articulate the “task” you want the model to perform through them. A common issue is that models, especially smaller ones, are very sensitive to the way the examples are written.
An approach to optimize Few-Shot Learning in production is to learn a common representation for a task and then train task-specific classifiers on top of this representation.
OpenAI showed in the [GPT-3 Paper](https://arxiv.org/abs/2005.14165) that the few-shot prompting ability improves with the number of language model parameters.
![few-shot-performance](/static/blog/few-shot-learning-gpt-neo/few-shot-performance.png)
<small>
Image from{' '}
<a href="https://arxiv.org/abs/2005.14165" target="_blank">
Language Models are Few-Shot Learners
</a>
</small>
Let's now take a look at how GPT-Neo and the 🤗 Accelerated Inference API can be used to generate your own Few-Shot Learning predictions!
---
## What is GPT-Neo?
GPT-Neo is a family of transformer-based language models from [EleutherAI](https://www.eleuther.ai/projects/gpt-neo/) based on the GPT architecture. [EleutherAI](https://www.eleuther.ai)'s primary goal is to train a model that is equivalent in size to GPT-3 and make it available to the public under an open license.
All of the currently available GPT-Neo checkpoints are trained with the Pile dataset, a large text corpus that is extensively documented in ([Gao et al., 2021](https://arxiv.org/abs/2101.00027)). As such, it is expected to function better on the text that matches the distribution of its training text; we recommend keeping this in mind when designing your examples.
---
## 🤗 Accelerated Inference API
The [Accelerated Inference API](https://huggingface.co/inference-api) is our hosted service to run inference on any of the 10,000+ models publicly available on the 🤗 Model Hub, or your own private models, via simple API calls. The API includes acceleration on CPU and GPU with [up to 100x speedup](https://huggingface.co/blog/accelerated-inference) compared to out of the box deployment of Transformers.
To integrate Few-Shot Learning predictions with `GPT-Neo` in your own apps, you can use the 🤗 Accelerated Inference API with the code snippet below. You can find your API Token [here](https://huggingface.co/settings/token), if you don't have an account you can get started [here](https://huggingface.co/pricing).
```python
import json
import requests

API_TOKEN = ""

def query(payload='', parameters=None, options={'use_cache': False}):
    API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    body = {"inputs": payload, 'parameters': parameters, 'options': options}
    response = requests.request("POST", API_URL, headers=headers, data=json.dumps(body))
    try:
        response.raise_for_status()
    except requests.exceptions.HTTPError:
        return "Error: " + str(response.json()['error'])
    else:
        return response.json()[0]['generated_text']

parameters = {
    'max_new_tokens': 25,  # number of generated tokens
    'temperature': 0.5,    # controlling the randomness of generations
    'end_sequence': "###"  # stopping sequence for generation
}

prompt = "...."  # few-shot prompt

data = query(prompt, parameters)
```
---
## Practical Insights
Here are some practical insights, which help you get started using `GPT-Neo` and the 🤗 Accelerated Inference API.
Since `GPT-Neo` (2.7B) is about 60x smaller than `GPT-3` (175B), it does not generalize so well to zero-shot problems and needs 3-4 examples to achieve good results. When you provide more examples `GPT-Neo` understands the task and takes the `end_sequence` into account, which allows us to control the generated text pretty well.
![insights-benefit-of-examples](/static/blog/few-shot-learning-gpt-neo/insights-benefit-of-examples.png)
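To illustrate how the `end_sequence` interacts with the prompt, here is a hypothetical few-shot prompt (not the exact prompt from the screenshots above) that uses `###` as a separator between examples, so the generation stops once the model emits the separator:

```python
# Hypothetical few-shot prompt for tweet sentiment classification.
# "###" ends each example and matches the `end_sequence` parameter from the snippet above.
few_shot_prompt = """Tweet: "I loved the new Batman movie!"
Sentiment: Positive
###
Tweet: "The service was terrible and slow."
Sentiment: Negative
###
Tweet: "I can't wait for the weekend!"
Sentiment:"""

# data = query(few_shot_prompt, parameters)  # reusing the query() helper defined earlier
```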
The hyperparameters `End Sequence`, `Token Length` & `Temperature` can be used to control the `text-generation` of the model, and you can use this to your advantage to solve the task you need. The `Temperature` controls the randomness of your generations: a lower temperature results in less random generations and a higher temperature results in more random generations.
![insights-benefit-of-hyperparameter](/static/blog/few-shot-learning-gpt-neo/insights-benefit-of-hyperparameter.png)
In the example, you can see how important it is to define your hyperparameters. They can make the difference between solving your task and failing miserably.
---
To use GPT-Neo or any Hugging Face model in your own application, you can [start a free trial](https://huggingface.co/pricing) of the 🤗 Accelerated Inference API.
If you need help mitigating bias in models and AI systems, or leveraging Few-Shot Learning, the 🤗 Expert Acceleration Program can [offer your team direct premium support from the Hugging Face team](https://huggingface.co/support). |
Custom Inference with Hugging Face Inference Endpoints | https://www.philschmid.de/custom-inference-handler | 2022-09-29 | [
"Inference",
"HuggingFace",
"BERT"
] | Welcome to this tutorial on how to create a custom inference handler for Hugging Face Inference Endpoints. | Welcome to this tutorial on how to create a custom inference handler for [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints).
The tutorial will cover how to extend a default transformers pipeline with custom business logic, customize the request & response body, and add additional Python dependencies.
You will learn how to:
1. [Set up Development Environment](#1-set-up-development-environment)
2. [Create a base `EndpointHandler` class](#2-create-a-base-endpointhandler-class)
3. [Customize `EndpointHandler`](#3-customize-endpointhandler)
4. [Test `EndpointHandler`](#4-test-endpointhandler)
5. [Push the custom handler to the hugging face repository](#5-push-the-custom-handler-to-the-hugging-face-repository)
6. [Deploy the custom handler as an Inference Endpoint](#6-deploy-the-custom-handler-as-an-inference-endpoint)
Let's get started! 🚀
## What is Hugging Face Inference Endpoints?
🤗 Inference Endpoints offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
A Hugging Face Inference Endpoint is built from a [Hugging Face Model Repository](https://huggingface.co/models). It supports all of the [Transformers and Sentence-Transformers tasks](https://huggingface.co/docs/inference-endpoints/supported_tasks) and any arbitrary ML Framework through easy customization by adding a [custom inference handler.](https://huggingface.co/docs/inference-endpoints/guides/custom_handler)
This [custom inference handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) can be used to implement simple inference pipelines for ML Frameworks like Keras, Tensorflow, and scikit-learn or can be used to add custom business logic to your existing transformers pipeline.
## Tutorial: Create a custom inference handler
Creating a custom inference handler for Hugging Face Inference Endpoints is super easy: you only need to add a `handler.py` to the model repository you want to deploy, which implements an `EndpointHandler` class with an `__init__` and a `__call__` method.

We will use the [philschmid/distilbert-base-uncased-emotion](https://huggingface.co/philschmid/distilbert-base-uncased-emotion) repository in this tutorial. The repository includes a DistilBERT model fine-tuned to detect emotions. We will create a custom handler which:
- customizes the request payload to add a date field
- adds an external package to check if the date is a holiday
- adds a custom post-processing step to check whether the input date is a holiday. If the date is a holiday, we will fix the emotion to “happy” - since everyone is happy when there are holidays 🌴🎉😆
---
Before we can get started make sure you meet all of the following requirements:
1. A Hugging Face model repository with your model weights
2. An Organization/User with an active plan and *WRITE* access to the model repository.
_You can access Inference Endpoints with a **PRO** user account or a Lab organization with a credit card on file. [Check out our plans](https://huggingface.co/pricing)._
If you want to create a Custom Handler for an existing model from the community, you can use the [repo_duplicator](https://huggingface.co/spaces/osanseviero/repo_duplicator) to create a repository fork, which you can then use to add your `handler.py`.
### 1. Set up Development Environment
The easiest way to develop our custom handler is to set up a local development environment, to implement, test, and iterate there, and then deploy it as an Inference Endpoint. The first step is to install all required development dependencies.
_needed to create the custom handler, not needed for inference_
```bash
# install git-lfs to interact with the repository
sudo apt-get update
sudo apt-get install git-lfs
# install transformers
pip install transformers[sklearn,sentencepiece,audio,vision]
```
After we have installed our libraries we will clone our repository to our development environment.
```bash
git lfs install
git clone https://huggingface.co/philschmid/distilbert-base-uncased-emotion
```
To be able to push our custom handler, we need to login into our HF account. This can be done by using the `huggingface-cli`.
```bash
# setup cli with token
huggingface-cli login
git config --global credential.helper store
```
### 2. Create a base `EndpointHandler` class
After we have set up our environment, we can start creating your custom handler. The custom handler is a Python class (`EndpointHandler`) inside a `handler.py` file in our repository. The `EndpointHandler` needs to implement an `__init__` and a `__call__` method.
- The `__init__` method will be called when starting the Endpoint and will receive 1 argument, a string with the path to your model weights. This allows you to load your model correctly.
- The `__call__` method will be called on every request and receive a dictionary with your request body as a python dictionary. It will always contain the `inputs` key.
The first step is to create our `handler.py` in the local clone of our repository.
```bash
cd distilbert-base-uncased-emotion && touch handler.py
```
Next, we add the `EndpointHandler` class with the `__init__` and `__call__` method.
```python
from typing import Dict, List, Any

class EndpointHandler():
    def __init__(self, path=""):
        # Preload all the elements you are going to need at inference.
        # pseudo:
        # self.model = load_model(path)
        pass

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        """
        data args:
            inputs (:obj: `str` | `PIL.Image` | `np.array`)
            kwargs
        Return:
            A :obj:`list` | `dict`: will be serialized and returned
        """
        # pseudo:
        # return self.model(data["inputs"])
        pass
```
### 3. Customize `EndpointHandler`
The third step is to add the custom logic we want to use during initialization (`__init__`) or inference (`__call__`). You can already find multiple [Custom Handlers on the Hub](https://huggingface.co/models?other=endpoints-template) if you need some inspiration.
First, we need to create a new `requirements.txt`, add our [holiday detection package](https://pypi.org/project/holidays/), and ensure we have it installed in our development environment.
```bash
# add packages to requirements.txt
echo "holidays" >> requirements.txt
# install in local environment
pip install -r requirements.txt
```
Next, we must adjust our `handler.py` and `EndpointHandler` to match our condition.
```python
from typing import Dict, List, Any
from transformers import pipeline
import holidays
class EndpointHandler():
def __init__(self, path=""):
self.pipeline = pipeline("text-classification",model=path)
self.holidays = holidays.US()
def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
"""
data args:
inputs (:obj: `str`)
date (:obj: `str`)
Return:
A :obj:`list` | `dict`: will be serialized and returned
"""
# get inputs
inputs = data.pop("inputs",data)
date = data.pop("date", None)
# check if date exists and if it is a holiday
if date is not None and date in self.holidays:
return [{"label": "happy", "score": 1}]
# run normal prediction
prediction = self.pipeline(inputs)
return prediction
```
### 4. Test `EndpointHandler`
We can test our `EndpointHandler` by importing it in another file/notebook, creating an instance of it, and then testing it by sending a prepared payload.
```python
from handler import EndpointHandler
# init handler
my_handler = EndpointHandler(path=".")
# prepare sample payload
non_holiday_payload = {"inputs": "I am quite excited how this will turn out", "date": "2022-08-08"}
holiday_payload = {"inputs": "Today is a tough day", "date": "2022-07-04"}
# test the handler
non_holiday_pred=my_handler(non_holiday_payload)
holiday_payload=my_handler(holiday_payload)
# show results
print("non_holiday_pred", non_holiday_pred)
print("holiday_payload", holiday_payload)
# non_holiday_pred [{'label': 'joy', 'score': 0.9985942244529724}]
# holiday_payload [{'label': 'happy', 'score': 1}]
```
It works!!!! 🎉
_Note: If you are using a notebook, you might have to restart your kernel when you make changes to the handler.py since it is not automatically re-imported._
### 5. Push the custom handler to the hugging face repository
After successfully testing our handler locally, we can push it to your repository using basic git commands.
```bash
# add all our new files
git add requirements.txt handler.py
# commit our files
git commit -m "add custom handler"
# push the files to the hub
git push
```
Now, you should see your `handler.py` and `requirements.txt` in your repository in the [“Files and version”](https://huggingface.co/philschmid/distilbert-base-uncased-emotion/tree/main) tab.
### 6. Deploy the custom handler as an Inference Endpoint
The last step is to deploy our custom handler as an Inference Endpoint. We can deploy our Custom Handler the same way as a regular Inference Endpoint.
Select the repository, the cloud, and the region, adjust the instance and security settings, and deploy.
![repository](/static/blog/custom-inference-handler/repository.png)
The Inference Endpoint Service will check during the creation of your Endpoint if there is a `handler.py` available and valid and will use it for serving requests no matter which “Task” you select.
_Note: If you modify the payload, e.g., adding a field, select “Custom” as the task in the advanced configuration. This will replace the inference widget with the custom Inference widget._
![task](/static/blog/custom-inference-handler/task.png)
After deploying our endpoint, we can test it using the inference widget. Since we have a `Custom` task, we have to provide a raw JSON as input.
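Based on the payloads we tested locally, a raw JSON input for the widget could look like this (the values are just examples):

```json
{
  "inputs": "I am quite excited how this will turn out",
  "date": "2022-07-04"
}
```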
![widget](/static/blog/custom-inference-handler/widget.png)
## Conclusion
That's it! We successfully created and deployed a custom inference handler to Hugging Face Inference Endpoints in 6 simple steps in less than 30 minutes.
To underline this again, we created a managed, secure, scalable inference endpoint that runs our custom handler, including our custom logic. We only needed to create our handler, define two methods, and then create our Endpoint through the UI.
This will allow Data scientists and Machine Learning Engineers to focus on R&D, improving the model rather than fiddling with MLOps topics.
Now, it's your turn! [Sign up](https://ui.endpoints.huggingface.co/new) and create your custom handler within a few minutes!
---
Thanks for reading. If you have any questions, contact me via [email](mailto:philipp@huggingface.co) or [forum](https://discuss.huggingface.co/c/inference-endpoints/64). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers | https://www.philschmid.de/fine-tune-flan-t5-deepspeed | 2023-02-16 | [
"T5",
"DeepSpeed",
"HuggingFace",
"Summarization"
] | Learn how to fine-tune Google's FLAN-T5 XXL using DeepSpeed & Hugging Face Transformers. | FLAN-T5, released with the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper, is an enhanced version of T5 that has been fine-tuned on a mixture of tasks, or, in simple words, a better T5 model in every aspect. FLAN-T5 outperforms T5 by double-digit improvements for the same number of parameters. Google has open sourced [5 checkpoints available on Hugging Face](https://huggingface.co/models?other=arxiv:2210.11416) ranging from 80M parameters up to 11B parameters.
In a previous blog post, we already learned how to [“Fine-tune FLAN-T5 for chat & dialogue summarization”](https://www.philschmid.de/fine-tune-flan-t5) using [the base version (250M parameter)](https://huggingface.co/google/flan-t5-base) of the model. In this blog post, we look into how we can scale the training from the Base version to the [XL (3B)](https://huggingface.co/google/flan-t5-xl) or [XXL (11B)](https://huggingface.co/google/flan-t5-xxl).
This means we will learn how to fine-tune FLAN-T5 XL & XXL using model parallelism, multiple GPUs, and [DeepSpeed ZeRO](https://www.deepspeed.ai/tutorials/zero/).
You will learn about the following:
1. [What is DeepSpeed ZeRO?](#1-what-is-deepspeed-zero)
2. [Fine-tune FLAN-T5-XXL using Deepspeed](#2-fine-tune-flan-t5-xxl-using-deepspeed)
3. [Results & Experiments](#3-results--experiments)
In addition to the tutorial, we have run a series of experiments to help you choose the right hardware setup. You can find the details in the Results & Experiments section.
Let's get started! 🚀
## 1. What is DeepSpeed ZeRO?
[DeepSpeed ZeRO](https://arxiv.org/abs/2101.06840) is part of the [DeepSpeed Training Pillar](https://www.deepspeed.ai/training/), which focuses on efficient large-scale training of Transformer models. DeepSpeed ZeRO, or Zero Redundancy Optimizer, is a method to reduce the memory footprint. Compared to basic data parallelism, ZeRO partitions optimizer states, gradients, and model parameters to save significant memory across multiple devices.
![deepspeed zero](/static/blog/fine-tune-flan-t5-deepspeed/deepspeed.png)
If you want to learn more about DeepSpeed ZeRO, checkout: [ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
DeepSpeed ZeRO is natively integrated into the [Hugging Face Transformers Trainer](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed). The integration enables leveraging ZeRO by simply providing a DeepSpeed config file, and the Trainer takes care of the rest.
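As a rough sketch of what this integration looks like, you only point the `TrainingArguments` at a DeepSpeed config file; the hyperparameter values below are illustrative, not the ones used later in the tutorial:

```python
from transformers import TrainingArguments

# Minimal sketch: enabling DeepSpeed ZeRO through the Trainer integration.
# All values are illustrative; the actual config file is created later in the tutorial.
training_args = TrainingArguments(
    output_dir="flan-t5-deepspeed",                       # where to store checkpoints
    per_device_train_batch_size=8,
    bf16=True,                                            # mixed precision on A100
    deepspeed="configs/ds_flan_t5_z3_config_bf16.json",   # path to the ZeRO config file
)
# The Trainer reads the config and takes care of partitioning and offloading for us.
```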
Excerpt: [DeepSpeed ZeRO-offload](https://www.deepspeed.ai/tutorials/zero-offload/)
DeepSpeed ZeRO not only allows us to parallelize our models on multiple GPUs, it also implements Offloading. [ZeRO-Offload](https://arxiv.org/abs/2101.06840) implements optimizations that offload optimizer and model to the CPU to train larger models on the given GPUs, e.g. [10B parameter GPT-2 on a single V100 GPU.](https://www.deepspeed.ai/tutorials/zero-offload/#training-a-10b-parameter-gpt-2-on-a-single-v100-gpu) We used ZeRO-offload for the experiments but will not use it in the tutorial.
## 2. Fine-tune FLAN-T5-XXL using Deepspeed
We now know that we can use DeepSpeed ZeRO together with Hugging Face Transformers to easily scale our hardware in cases where the model no longer fits on GPU. That's exactly what we need to solve since the FLAN-T5-XXL weights in fp32 are already 44GB big. This makes it almost impossible to fit on a single GPU when adding activations and optimizer states.
In this tutorial, we cover how to fine-tune [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) (11B version) on the [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail) for news summarization. The provided script and pre-processing can easily be adjusted to fine-tune FLAN-T5-XL and use a different dataset.
_Note: This tutorial was created and run on a p4dn.24xlarge AWS EC2 Instance including 8x NVIDIA A100 40GB._
### Setup Development Environment
The first step is to install the Hugging Face Libraries, including transformers and datasets, and DeepSpeed. Running the following cell will install all the required packages.
```bash
# install torch with the correct cuda version, check nvcc --version
pip install torch --extra-index-url https://download.pytorch.org/whl/cu116 --upgrade
# install Hugging Face Libraries
pip install "transformers==4.26.0" "datasets==2.9.0" "accelerate==0.16.0" "evaluate==0.4.0" --upgrade
# install deepspeed and ninja for jit compilations of kernels
pip install "deepspeed==0.8.0" ninja --upgrade
# install additional dependencies needed for training
pip install rouge-score nltk py7zr tensorboard
```
### Load and prepare dataset
Similar to the [“Fine-tune FLAN-T5 for chat & dialogue summarization”](https://www.philschmid.de/fine-tune-flan-t5) we need to prepare a dataset to fine-tune our model. As mentioned in the beginning, we will fine-tune [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) on the [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail). The blog post is not going into detail about the dataset generation. If you want to learn the detailed steps check out the [previous post](https://www.philschmid.de/fine-tune-flan-t5).
We define some parameters, which we use throughout the whole example, feel free to adjust it to your needs.
```python
# experiment config
model_id = "google/flan-t5-xxl" # Hugging Face Model Id
dataset_id = "cnn_dailymail" # Hugging Face Dataset Id
dataset_config = "3.0.0" # config/version of the dataset
save_dataset_path = "data" # local path to save processed dataset
text_column = "article" # column of input text is
summary_column = "highlights" # column of the output text
# custom instruct prompt start
prompt_template = f"Summarize the following news article:\n{{input}}\nSummary:\n"
```
Compared to the [previous example](https://www.philschmid.de/fine-tune-flan-t5), we are splitting the processing and training into two separate paths. This allows you to run the preprocessing outside of the GPU instance. We process (tokenize) the dataset and save it to disk and then load in our train script from disk again.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np
# Load dataset from the hub
dataset = load_dataset(dataset_id,name=dataset_config)
# Load tokenizer of FLAN-t5-base
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 287113
# Test dataset size: 11490
```
We defined a `prompt_template` in our config, which we will use to construct an instruct prompt for better performance of our model. Our `prompt_template` has a “fixed” start and end, and our document is in the middle. This means we need to ensure that the “fixed” template parts + document do not exceed the max length of the model. Therefore we calculate the max length of our document, which we will later use for padding and truncation.
```python
prompt_lenght = len(tokenizer(prompt_template.format(input=""))["input_ids"])
max_sample_length = tokenizer.model_max_length - prompt_lenght
print(f"Prompt length: {prompt_lenght}")
print(f"Max input length: {max_sample_length}")
# Prompt length: 12
# Max input length: 500
```
We now know that our documents can be up to 500 tokens long and still fit our `prompt_template` correctly. In addition to our input, we need to better understand our “target” sequence length, meaning how long the summaries in our dataset are. Therefore we iterate over the dataset and calculate the max input length (at most 500) and the max target length (this takes a few minutes).
```python
from datasets import concatenate_datasets
import numpy as np
# The maximum total input sequence length after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded.
tokenized_inputs = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x[text_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])
max_source_length = max([len(x) for x in tokenized_inputs["input_ids"]])
max_source_length = min(max_source_length, max_sample_length)
print(f"Max source length: {max_source_length}")
# The maximum total sequence length for target text after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded.
tokenized_targets = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x[summary_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])
target_lenghts = [len(x) for x in tokenized_targets["input_ids"]]
# use 95th percentile as max target length
max_target_length = int(np.percentile(target_lenghts, 95))
print(f"Max target length: {max_target_length}")
```
We now have everything needed to process our dataset.
```python
import os
def preprocess_function(sample, padding="max_length"):
# created prompted input
inputs = [prompt_template.format(input=item) for item in sample[text_column]]
# tokenize inputs
model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
# Tokenize targets with the `text_target` keyword argument
labels = tokenizer(text_target=sample[summary_column], max_length=max_target_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length":
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
# process dataset
tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=list(dataset["train"].features))
# save dataset to disk
tokenized_dataset["train"].save_to_disk(os.path.join(save_dataset_path,"train"))
tokenized_dataset["test"].save_to_disk(os.path.join(save_dataset_path,"eval"))
```
### Fine-tune model using `deepspeed`
Done! We can now start training our model! We learned in the introduction that we would leverage the DeepSpeed integration with the Hugging Face Trainer. Therefore we need to create a `deepspeed_config.json`. In the [DeepSpeed Configuration,](https://www.deepspeed.ai/docs/config-json/) we define the ZeRO strategy we want to use and if we want to use mixed precision training. The Hugging Face Trainer allows us to inherit values from the `TrainingArguments` in our `deepspeed_config.json` to avoid duplicate values; check the [documentation for more information.](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration)
We created 4 deepspeed configurations for the experiments we ran, including `CPU offloading` and `mixed precision`:
- [ds_flan_t5_z3_config.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config.json)
- [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config_bf16.json)
- [ds_flan_t5_z3_offload.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_offload.json)
- [ds_flan_t5_z3_offload_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_offload_bf16.json)
Depending on your setup, you can use those, e.g. if you are running on NVIDIA V100s, you have to use the config without `bf16` since V100s do not support the `bfloat16` type.
> When fine-tuning `T5` models we cannot use `fp16` since it leads to overflow issues, see: [#4586](https://github.com/huggingface/transformers/issues/4586), [#10830](https://github.com/huggingface/transformers/issues/10830), [#10956](https://github.com/huggingface/transformers/pull/10956)
As mentioned in the beginning, we are using a p4dn.24xlarge AWS EC2 Instance including 8x NVIDIA A100 40GB. This means we can leverage `bf16`, which reduces the memory footprint of the model by almost 2x and allows us to train efficiently without offloading.
We are going to use the [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config_bf16.json). If you are irritated by the `auto` values, check the [documentation](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration).
```json
{
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": false
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Now, we need our training script. We prepared a [run_seq2seq_deepspeed.py](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/scripts/run_seq2seq_deepspeed.py) training script based on the [previous blog post](https://www.philschmid.de/fine-tune-flan-t5), which supports our deepspeed config and all other hyperparameters.
We can start our training with the `deepspeed` launcher providing the number of GPUs, the deepspeed config, and our hyperparameters, including our model id for `google/flan-t5-xxl`.
```bash
deepspeed --num_gpus=8 scripts/run_seq2seq_deepspeed.py \
--model_id google/flan-t5-xxl \
--dataset_path data \
--epochs 3 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--generation_max_length 129 \
--lr 1e-4 \
--deepspeed configs/ds_flan_t5_z3_config_bf16.json
```
Deepspeed now loads our model on the CPU and then splits it across our 8x A100s and starts the training. The training using the [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail) takes roughly 10 hours and costs `~$322`.
## 3. Results & Experiments
During the creation of the tutorial and to get a better understanding of the hardware requirements, we ran a series of experiments for FLAN-T5 XL & XXL, which should help us evaluate and understand the hardware requirements and cost of training those models.
We ran the experiments only for ~20% of the training without evaluation and calculated the `duration` based on this estimate.
Below you'll find a table of the experiments and more information about the setup.
Dataset: [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail) with a train dataset size of `287113` samples with a sequence length of `512`
Hyperparameters: Epoch 3
Setup and instance types:
- 4x V100 16GB: p3.8xlarge
- 4x A10G 24GB: g5.24xlarge
- 8x V100 16GB: p3.16xlarge
- 8x A100 40GB: p4dn.24xlarge
| Model | DS ZeRO offload | Hardware | batch size per GPU | precision | duration | cost |
| ----------------- | --------------- | ------------ | ------------------ | --------- | -------- | ------ |
| FLAN-T5-XL (3B) | No | 4x V100 16GB | OOM | fp32 | - | - |
| FLAN-T5-XL (3B) | No | 8x V100 16GB | 1 | fp32 | 105h | ~$2570 |
| FLAN-T5-XL (3B)   | No              | 8x A100 40GB | 72                 | bf16      | 2.5h     | ~$81   |
| FLAN-T5-XL (3B) | Yes | 4x V100 16GB | 8 | fp32 | 69h | ~$828 |
| FLAN-T5-XL (3B) | Yes | 8x V100 16GB | 8 | fp32 | 32h | ~$768 |
| FLAN-T5-XXL (11B) | Yes | 4x V100 16GB | OOM | fp32 | - | - |
| FLAN-T5-XXL (11B) | Yes | 8x V100 16GB | OOM | fp32 | - | - |
| FLAN-T5-XXL (11B) | Yes | 4x A10G 24GB | 24 | bf16 | 90h | ~$732 |
| FLAN-T5-XXL (11B) | Yes | 8x A100 40GB | 48 | bf16 | 19h | ~$613 |
| FLAN-T5-XXL (11B) | No | 8x A100 40GB | 8 | bf16 | 10h | ~$322 |
We can see that `bf16` provides significant advantages over `fp32`. We could fit FLAN-T5-XXL on 4x A10G (24GB) but not on 8x V100 16GB.
We also learned that if the model fits on the GPUs with a batch size > 4 without offloading, we are ~2x faster and more cost-effective than offloading the model and scaling the batch size.
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module | https://www.philschmid.de/terraform-huggingface-amazon-sagemaker | 2022-02-08 | [
"HuggingFace",
"AWS",
"BERT",
"Terraform"
] | Learn how to deploy BERT/DistilBERT with Hugging Face Transformers using Amazon SageMaker and Terraform module. | _“Infrastructure as Code (IaC) is **the managing and provisioning of infrastructure through code instead of through manual processes**. With IaC, configuration files are created that contain your infrastructure specifications, which makes it easier to edit and distribute configurations. It also ensures that you provision the same environment every time.”_ - [Red Hat](https://www.redhat.com/en/topics/automation/what-is-infrastructure-as-code-iac)
Provisioning infrastructure and deploying machine learning models in the past has been a time-consuming and costly manual process.
Especially due to faster time to market and shorter development and experimentation cycles, it must be possible to always rebuild, scale up, change and take down the infrastructure frequently. Without an IaC this wouldn’t be possible.
IaC enables not only [Continuous Deployment](https://www.suse.com/suse-defines/definition/continuous-deployment/) but also can:
- reduce cost
- increase in speed of deployments
- reduce errors, especially Human-errors
- improve infrastructure consistency
- eliminate configuration drift
I think it is clear to all of you that you need IaC to be able to run and implement machine learning projects successfully for enterprise-grade production workloads.
In this blog post, we will use one of the most popular open-source IaC tools (Terraform) to deploy DistilBERT into “production” using [Amazon SageMaker](https://aws.amazon.com/sagemaker/?nc1=h_ls) as the underlying infrastructure.
[HashiCorp Terraform](https://www.terraform.io/intro) is an IaC tool that lets you define resources in human-readable configuration files that you can version, reuse, and share. Besides [Amazon Web Services](https://learn.hashicorp.com/collections/terraform/aws-get-started), Terraform also supports all other major cloud providers and tools like Docker and Kubernetes.
```hcl
provider "aws" {
profile = "default"
region = "us-west-2"
}
resource "aws_instance" "app_server" {
ami = "ami-830c94e3"
instance_type = "t2.micro"
tags = {
Name = "ExampleAppServerInstance"
}
}
```
This snippet would create an Amazon EC2 `t2.micro` instance in `us-west-2` using my `default` credentials and the `ami-830c94e3`. If you are not familiar with Terraform or want to learn more about it first, you can take a look at their [“Build Infrastructure - Terraform AWS Example”](https://learn.hashicorp.com/tutorials/terraform/aws-build).
---
## The [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) terraform module
In addition to [resources](https://www.terraform.io/language/resources), Terraform also includes the [module](https://www.terraform.io/language/modules) element. _“A module is a container for multiple [resources](https://www.terraform.io/language/resources) that are used together. Modules can be used to create lightweight abstractions, so that you can describe your infrastructure in terms of its architecture, rather than directly in terms of physical objects.”_
In order to deploy a Hugging Face Transformer to Amazon SageMaker for inference you always need a [SageMaker Model](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-model.html), a [SageMaker Endpoint Configuration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-endpointconfig.html) & a [SageMaker Endpoint](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-endpoint.html). All of those components are available as [resources in Terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sagemaker_model). We could use all of those individual components to deploy our model, but it makes much more sense to create a [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) module, which abstracts away the required logic, such as selecting the correct container and how the model should be deployed. So we did this.
The [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) Terraform module enables easy deployment of [Hugging Face Transformer models](https://hf.co/models) to [Amazon SageMaker](https://aws.amazon.com/de/sagemaker/) real-time endpoints. The module will create all the necessary resources to deploy a model to Amazon SageMaker including an IAM role (if not provided), SageMaker Model, SageMaker Endpoint Configuration, and SageMaker Endpoint.

With the module, you can deploy [Hugging Face Transformer](https://hf.co/models) models directly from the [Model Hub](https://hf.co/models) or from Amazon S3 to Amazon SageMaker for PyTorch and Tensorflow based models.
Registry: [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest)
Github: [philschmid/terraform-aws-sagemaker-huggingface](https://github.com/philschmid/terraform-aws-sagemaker-huggingface)
```hcl
module "sagemaker-huggingface" {
source = "philschmid/sagemaker-huggingface/aws"
version = "0.2.0"
name_prefix = "distilbert"
pytorch_version = "1.9.1"
transformers_version = "4.12.3"
instance_type = "ml.g4dn.xlarge"
instance_count = 1 # default is 1
hf_model_id = "distilbert-base-uncased-finetuned-sst-2-english"
hf_task = "text-classification"
}
```
---
## How to deploy DistilBERT with the [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) terraform module
Before we get started, make sure you have the [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) installed and configured, as well as access to AWS Credentials to create the necessary services. [[Instructions](https://learn.hashicorp.com/tutorials/terraform/aws-build?in=terraform/aws-get-started#prerequisites)]
**What are we going to do:**
- create a new Terraform configuration
- initialize the AWS provider and our module
- deploy the DistilBERT model
- test the endpoint
- destroy the infrastructure
### Create a new Terraform configuration
Each Terraform configuration must be in its own directory including a `main.tf` file. Our first step is to create the `distilbert-terraform` directory with a `main.tf` file.
```bash
mkdir distilbert-terraform
touch distilbert-terraform/main.tf
cd distilbert-terraform
```
### Initialize the AWS provider and our module
Next, we need to open the `main.tf` in a text editor and add the `aws` provider as well as our `module`.
_Note: the snippet below assumes that you have an AWS profile `default` configured with the needed permissions_
```bash
provider "aws" {
profile = "default"
region = "us-east-1"
}
module "sagemaker-huggingface" {
source = "philschmid/sagemaker-huggingface/aws"
version = "0.2.0"
name_prefix = "distilbert"
pytorch_version = "1.9.1"
transformers_version = "4.12.3"
instance_type = "ml.g4dn.xlarge"
instance_count = 1 # default is 1
hf_model_id = "distilbert-base-uncased-finetuned-sst-2-english"
hf_task = "text-classification"
}
```
When we create a new configuration — or check out an existing configuration from version control — we need to initialize the directory with `terraform init`.
Initializing will download and install our AWS provider as well as the `sagemaker-huggingface` module.
```bash
terraform init
# Initializing modules...
# Downloading philschmid/sagemaker-huggingface/aws 0.2.0 for sagemaker-huggingface...
# - sagemaker-huggingface in .terraform/modules/sagemaker-huggingface
#
# Initializing the backend...
#
# Initializing provider plugins...
# - Finding latest version of hashicorp/aws...
# - Installing hashicorp/aws v3.74.1...
```
### Deploy the DistilBERT model
To deploy/apply our configuration we run `terraform apply` command. Terraform will then print out which resources are going to be created and ask us if we want to continue, which can we confirm with `yes`.
```bash
terraform apply
```
Now Terraform will deploy our model to Amazon SageMaker as a real-time endpoint. This can take 2-5 minutes.
### Test the endpoint
To test our deployed endpoint we can use the [AWS SDK](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html#API_runtime_InvokeEndpoint_SeeAlso). In our example we are going to use the Python SDK (`boto3`), but you can easily switch this to the Java, Javascript, .NET, or Go SDK to invoke the Amazon SageMaker endpoint.
To be able to invoke our endpoint we need the endpoint name. The Endpoint name in our module will always be `${name_prefix}-endpoint` so in our case it is `distilbert-endpoint`. You can also get the endpoint name by inspecting the output of Terraform with `terraform output` or going to the SageMaker service in the AWS Management console.
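For example, you can print the module outputs directly from the configuration directory (the exact output names depend on the module version):

```bash
terraform output
# prints the outputs defined by the module, e.g. the name of the created SageMaker endpoint
```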
We create a new file `request.py` with the following snippet.
_Make sure you have configured your credentials (and region) correctly_
```python
import boto3
import json
client = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "distilbert-endpoint"
body={"inputs":"This terraform module is amazing."}
response = client.invoke_endpoint(
EndpointName=ENDPOINT_NAME,
ContentType="application/json",
Accept="application/json",
Body=json.dumps(body),
)
print(response['Body'].read().decode('utf-8'))
```
Now we can execute our request.
```python
python3 request.py
#[{"label":"POSITIVE","score":0.9998819828033447}]
```
### Destroy the infrastructure
To clean up our created resources we can run `terraform destroy`, which will delete all the created resources from the module.
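The command follows the same pattern as `terraform apply` and asks for confirmation before removing anything:

```bash
terraform destroy
# Terraform lists all resources created by the module (SageMaker Endpoint, Endpoint
# Configuration, Model and IAM role) and asks for confirmation before deleting them.
```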
## More examples
The [module](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) already includes several configuration options to deploy Hugging Face Transformers. You can find all examples in the [Github repository](https://github.com/philschmid/terraform-aws-sagemaker-huggingface/tree/master/examples) of the module.
The example we used deploys a PyTorch model from the [Hugging Face Hub](http://hf.co/models). This is also possible for Tensorflow models, as well as for deploying models from Amazon S3.
### Deploy Tensorflow models to Amazon SageMaker
```hcl
module "sagemaker-huggingface" {
source = "philschmid/sagemaker-huggingface/aws"
version = "0.2.0"
name_prefix = "tf-distilbert"
tensorflow_version = "2.5.1"
transformers_version = "4.12.3"
instance_type = "ml.g4dn.xlarge"
instance_count = 1 # default is 1
hf_model_id = "distilbert-base-uncased-finetuned-sst-2-english"
hf_task = "text-classification"
}
```
### Deploy Transformers from Amazon S3 to Amazon SageMaker
```hcl
module "sagemaker-huggingface" {
source = "philschmid/sagemaker-huggingface/aws"
version = "0.2.0"
name_prefix = "deploy-s3"
pytorch_version = "1.9.1"
transformers_version = "4.12.3"
instance_type = "ml.g4dn.xlarge"
instance_count = 1 # default is 1
model_data = "s3://my-bucket/mypath/model.tar.gz"
hf_task = "text-classification"
}
```
## Provide an existing IAM Role to deploy Amazon SageMaker
None of the examples above include an IAM Role. The reason for this is that the module creates the required IAM Role for Amazon SageMaker together with the other resources, but it is also possible to provide an existing IAM Role with the `sagemaker_execution_role` variable.
```hcl
module "sagemaker-huggingface" {
source = "philschmid/sagemaker-huggingface/aws"
version = "0.2.0"
name_prefix = "distilbert"
pytorch_version = "1.9.1"
transformers_version = "4.12.3"
instance_type = "ml.g4dn.xlarge"
instance_count = 1 # default is 1
hf_model_id = "distilbert-base-uncased-finetuned-sst-2-english"
hf_task = "text-classification"
sagemaker_execution_role = "the_name_of_my_iam_role"
}
```
## Conclusion
The [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) Terraform module abstracts away all the heavy lifting for deploying Transformer models to Amazon SageMaker, which enables controlled, consistent and understandable managed deployments following the concepts of IaC. This should help companies to move faster and include models deployed to Amazon SageMaker in their existing applications and IaC definitions.
Give it a try and tell us what you think about the module.
The next step is going to be support for easily configurable autoscaling for the endpoints, where the heavy lifting is abstracted into the module.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Optimizing Transformers for GPUs with Optimum | https://www.philschmid.de/optimizing-transformers-with-optimum-gpu | 2022-07-13 | [
"BERT",
"OnnxRuntime",
"HuggingFace",
"Optimization"
] | Learn how to optimize Hugging Face Transformers models for NVIDIA GPUs using Optimum. You will learn how to optimize a DistilBERT for ONNX Runtime | In this session, you will learn how to optimize Hugging Face Transformers models for GPUs using Optimum. The session will show you how to convert your weights to fp16 weights and optimize a DistilBERT model using [Hugging Face Optimum](https://huggingface.co/docs/optimum/index) and [ONNX Runtime](https://onnxruntime.ai/). Hugging Face Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware. We are going to optimize a DistilBERT model for Question Answering, which was fine-tuned on the SQuAD dataset, to decrease the latency from 7ms to 3ms for a sequence length of 128.
_Note: int8 quantization is currently only supported for CPUs. We plan to add support for GPUs in the near future using TensorRT._
By the end of this session, you will know how GPU optimization with Hugging Face Optimum can result in a significant decrease in model latency and increase in throughput while keeping 100% of the accuracy of the full-precision model.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Convert a Hugging Face `Transformers` model to ONNX for inference](#2-convert-a-hugging-face-transformers-model-to-onnx-for-inference)
3. [Optimize model for GPU using `ORTOptimizer`](#3-optimize-model-for-gpu-using-ortoptimizer)
4. [Evaluate the performance and speed](#4-evaluate-the-performance-and-speed)
Let's get started! 🚀
_This tutorial was created and run on an g4dn.xlarge AWS EC2 Instance including a NVIDIA T4._
---
## 1. Setup Development Environment
Our first step is to install Optimum, along with Evaluate and some other libraries. Running the following cell will install all the required packages for us including Transformers, PyTorch, and ONNX Runtime utilities:
_Note: You need a machine with a GPU and CUDA installed. You can check this by running `nvidia-smi` in your terminal. If your environment is set up correctly, you should see statistics about your GPU._
```python
%pip install "optimum[onnxruntime-gpu]==1.3.0" --upgrade
```
Before we start. Lets make sure we have the `CUDAExecutionProvider` for ONNX Runtime available.
```python
from onnxruntime import get_available_providers, get_device
import onnxruntime
# check available providers
assert 'CUDAExecutionProvider' in get_available_providers(), "ONNX Runtime GPU provider not found. Make sure onnxruntime-gpu is installed and onnxruntime is uninstalled."
assert "GPU" == get_device()
# assert version due to a bug in 1.11.1
assert onnxruntime.__version__ > "1.11.1", "you need a newer version of ONNX Runtime"
```
> If you want to run inference on a CPU, you can install 🤗 Optimum with `pip install optimum[onnxruntime]`.
## 2. Convert a Hugging Face `Transformers` model to ONNX for inference
Before we can start optimizing our model we need to convert our vanilla `transformers` model to the `onnx` format. To do this we will use the new [ORTModelForQuestionAnswering](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForQuestionAnswering) class calling the `from_pretrained()` method with the `from_transformers` attribute. The model we are using is the [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) a fine-tuned DistilBERT-based model on the SQuAD dataset achieving an F1 score of `87.1` and as the feature (task) `question-answering`.
```python
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import AutoTokenizer
from pathlib import Path
model_id="distilbert-base-cased-distilled-squad"
onnx_path = Path("onnx")
# load vanilla transformers and convert to onnx
model = ORTModelForQuestionAnswering.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)
```
Before we jump into the optimization of the model, let's first evaluate its current performance. Therefore we can use the `pipeline()` function from 🤗 Transformers, meaning we will measure the end-to-end latency including the pre- and post-processing steps.
```python
context="Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value."
question="As what is Philipp working?"
```
After we prepared our payload we can create the inference `pipeline`.
```python
from transformers import pipeline
vanilla_qa = pipeline("question-answering", model=model, tokenizer=tokenizer, device=0)
print(f"pipeline is loaded on device {vanilla_qa.model.device}")
print(vanilla_qa(question=question,context=context))
# pipeline is loaded on device cuda:0
# {'score': 0.6575328707695007, 'start': 88, 'end': 102, 'answer': 'Technical Lead'}
```
If you are seeing a `CreateExecutionProviderInstance` error, you do not have a compatible `cuda` version installed. Check the [documentation](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html) for which cuda version you need.
If you want to learn more about exporting transformers models, check out the [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx) blog post.
## 3. Optimize model for GPU using `ORTOptimizer`
The [ORTOptimizer](https://huggingface.co/docs/optimum/onnxruntime/optimization#optimum.onnxruntime.ORTOptimizer) allows you to apply ONNX Runtime optimization on our Transformers models. In addition to the `ORTOptimizer`, Optimum offers an [OptimizationConfig](https://huggingface.co/docs/optimum/onnxruntime/configuration#optimum.onnxruntime.configuration.OptimizationConfig), a configuration class handling all the ONNX Runtime optimization parameters.
There are several techniques to optimize our model for GPUs, including graph optimizations and converting our model weights from `fp32` to `fp16`.
Graph optimizations are essentially graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations.
Examples of graph optimizations include:
- **Constant folding**: evaluate constant expressions at compile time instead of runtime
- **Redundant node elimination**: remove redundant nodes without changing graph structure
- **Operator fusion**: merge one node (i.e. operator) into another so they can be executed together
![operator fusion](/static/blog/optimizing-transformers-with-optimum-gpu/operator_fusion.png)
If you want to learn more about graph optimization you take a look at the [ONNX Runtime documentation](https://onnxruntime.ai/docs/performance/graph-optimizations.html).
To achieve best performance we will apply the following optimizations parameter in our `OptimizationConfig`:
- `optimization_level=99`: to enable all the optimizations. _Note: Switching Hardware after optimization can lead to issues._
- `optimize_for_gpu=True`: to enable GPU optimizations.
- `fp16=True`: to convert model computation from `fp32` to `fp16`. _Note: Only for V100 and T4 or newer._
```python
from optimum.onnxruntime import ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig
# create ORTOptimizer and define optimization configuration
optimizer = ORTOptimizer.from_pretrained(model_id, feature=model.pipeline_task)
optimization_config = OptimizationConfig(optimization_level=99,
optimize_for_gpu=True,
fp16=True
)
# apply the optimization configuration to the model
optimizer.export(
onnx_model_path=onnx_path / "model.onnx",
onnx_optimized_model_output_path=onnx_path / "model-optimized.onnx",
optimization_config=optimization_config,
)
```
To test performance we can use the `ORTModelForQuestionAnswering` class again and provide an additional `file_name` parameter to load our optimized model. _(This also works for models available on the hub)._
```python
from transformers import pipeline
# load optimized model
model = ORTModelForQuestionAnswering.from_pretrained(onnx_path, file_name="model-optimized.onnx")
# create optimized pipeline
optimized_qa = pipeline("question-answering", model=model, tokenizer=tokenizer, device=0)
print(optimized_qa(question=question,context=context))
```
## 4. Evaluate the performance and speed
As the last step, we want to take a detailed look at the performance and accuracy of our model. Applying optimization techniques like graph optimizations or mixed precision not only impacts performance (latency), it can also have an impact on the accuracy of the model. So accelerating your model comes with a trade-off.
Let's evaluate our models. Our transformers model [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) was fine-tuned on the SQuAD dataset.
```python
from transformers import pipeline
trfs_qa = pipeline("question-answering", model=model_id, device=0)
```
```python
from datasets import load_metric, load_dataset
metric = load_metric("squad")
eval_dataset = load_dataset("squad")["validation"]
# creating a subset for faster evaluation
# COMMENT IN to run evaluation on a subset of the dataset
# eval_dataset = eval_dataset.select(range(1000))
```
We can now leverage the [map](https://huggingface.co/docs/datasets/v2.1.0/en/process#map) function of [datasets](https://huggingface.co/docs/datasets/index) to iterate over the validation set of `squad` and run a prediction for each data point. Therefore we write an `evaluate` helper method which uses our pipelines and applies some transformations to work with the [squad metric.](https://huggingface.co/metrics/squad)
```python
def evaluate(example):
default = vanilla_qa(question=example["question"], context=example["context"])
optimized = optimized_qa(question=example["question"], context=example["context"])
return {
'reference': {'id': example['id'], 'answers': example['answers']},
'default': {'id': example['id'],'prediction_text': default['answer']},
'optimized': {'id': example['id'],'prediction_text': optimized['answer']},
}
result = eval_dataset.map(evaluate)
default_acc = metric.compute(predictions=result["default"], references=result["reference"])
optimized = metric.compute(predictions=result["optimized"], references=result["reference"])
print(f"vanilla model: f1={default_acc['f1']}%")
print(f"optimized model: f1={optimized['f1']}%")
print(f"The optimized model achieves {round(optimized['f1']/default_acc['f1'],2)*100:.2f}% accuracy of the fp32 model")
# vanilla model: f1=86.84859514665654%
# optimized model: f1=86.8536859246896%
# The optimized model achieves 100.00% accuracy of the fp32 model
```
Okay, now let's test the performance (latency) of our optimized model. We are going to use a payload with a sequence length of 128 for the benchmark. To keep it simple, we are going to use a python loop and calculate the average, standard deviation & p95 latency for our vanilla model and for the optimized model.
```python
from time import perf_counter
import numpy as np
context="Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value."
question="As what is Philipp working?"
def measure_latency(pipe):
latencies = []
# warm up
for _ in range(10):
_ = pipe(question=question,context=context)
# Timed run
for _ in range(300):
start_time = perf_counter()
_ = pipe(question=question,context=context)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
time_p95_ms = 1000 * np.percentile(latencies,95)
return f"P95 latency (ms) - {time_p95_ms}; Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f};", time_p95_ms
vanilla_model=measure_latency(vanilla_qa)
optimized_model=measure_latency(optimized_qa)
print(f"Vanilla model: {vanilla_model[0]}")
print(f"Optimized model: {optimized_model[0]}")
print(f"Improvement through optimization: {round(vanilla_model[1]/optimized_model[1],2)}x")
# Vanilla model: P95 latency (ms) - 7.784631400227273; Average latency (ms) - 6.87 +\- 1.20;
# Optimized model: P95 latency (ms) - 3.392388850079442; Average latency (ms) - 3.32 +\- 0.03;
# Improvement through optimization: 2.29x
```
We managed to accelerate our model latency from 7.8ms to 3.4ms or 2.3x while keeping 100.00% of the accuracy.
![performance](/static/blog/optimizing-transformers-with-optimum-gpu/performance.png)
## Conclusion
We successfully optimized our vanilla Transformers model with Hugging Face Optimum and managed to accelerate our model latency from 7.8ms to 3.4ms or 2.3x while keeping 100.00% of the accuracy.
But I have to say that this isn't a plug-and-play process you can transfer to any Transformers model, task, or dataset.
---
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/optimum), or on the [forum](https://discuss.huggingface.co/). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy FLAN-T5 XXL on Amazon SageMaker | https://www.philschmid.de/deploy-flan-t5-sagemaker | 2023-02-08 | [
"T5",
"SageMaker",
"HuggingFace",
"Inference"
] | Learn how to deploy Google's FLAN-T5 XXL on Amazon SageMaker for inference. | Welcome to this Amazon SageMaker guide on how to deploy the [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) on Amazon SageMaker for inference. We will deploy [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) to Amazon SageMaker for real-time inference using the Hugging Face Inference Deep Learning Container.
![flan-t5-on-amazon-sagemaker](/static/blog/deploy-flan-t5-sagemaker/sagemaker-endpoint.png)
What we are going to do
1. [Create FLAN-T5 XXL inference script with bnb quantization](#1-create-flan-t5-xxl-inference-script-with-bnb-quantization)
2. [Create SageMaker `model.tar.gz` artifact](#2-create-sagemaker-modeltargz-artifact)
3. [Deploy the model to Amazon SageMaker](#3-deploy-the-model-to-amazon-sagemaker)
4. [Run inference using the deployed model](#4-run-inference-using-the-deployed-model)
## Quick intro: FLAN-T5, just a better T5
FLAN-T5 released with the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper is an enhanced version of T5 that has been finetuned in a mixture of tasks. The paper explores instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. The paper discovers that overall instruction finetuning is a general method for improving the performance and usability of pretrained language models.
![flan-t5](/static/blog/deploy-flan-t5-sagemaker/flan-t5.webp)
- Paper: https://arxiv.org/abs/2210.11416
- Official repo: https://github.com/google-research/t5x
---
Before we can get started we have to install the missing dependencies to be able to create our `model.tar.gz` artifact and create our Amazon SageMaker endpoint.
We also have to make sure we have the permission to create our SageMaker Endpoint.
```python
!pip install "sagemaker==2.116.0" "huggingface_hub==0.12.0" --upgrade --quiet
```
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 1. Create FLAN-T5 XXL inference script with bnb quantization
Amazon SageMaker allows us to customize the inference script by providing an `inference.py` file. The `inference.py` file is the entry point to our model. It is responsible for loading the model and handling the inference request. If you are used to deploying Hugging Face Transformers that might be new to you. Usually, we just provide the `HF_MODEL_ID` and `HF_TASK` and the Hugging Face DLC takes care of the rest. For `FLAN-T5-XXL` that's not yet possible. We have to provide the `inference.py` file and implement the `model_fn` and `predict_fn` functions to efficiently load the 11B large model.
If you want to learn more about creating a custom inference script you can check out [Creating document embeddings with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/custom-inference-huggingface-sagemaker)
In addition to the `inference.py` file we also have to provide a `requirements.txt` file. The `requirements.txt` file is used to install the dependencies for our `inference.py` file.
The first step is to create a `code/` directory.
```python
!mkdir code
```
Next, we create a `requirements.txt` file and add the `accelerate` and `bitsandbytes` libraries to it. The `accelerate` library is used to efficiently load the model on the GPU. The `bitsandbytes` library is used to quantize the model to 8-bit using LLM.int8(). LLM.int8 introduces a new quantization technique for Int8 matrix multiplication, which cuts the memory needed for inference roughly in half while preserving the model's predictive performance. To learn more, check out this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration) or the [paper](https://arxiv.org/abs/2208.07339).
```python
%%writefile code/requirements.txt
accelerate==0.16.0
transformers==4.26.0
bitsandbytes==0.37.0
```
The last step for our inference handler is to create the `inference.py` file. The `inference.py` file is responsible for loading the model and handling the inference request. The `model_fn` function is called when the model is loaded. The `predict_fn` function is called when we want to do inference.
We are using the `AutoModelForSeq2SeqLM` class from transformers to load the model from the local directory (`model_dir`) in the `model_fn`. In the `predict_fn` function we are using the `generate` function from transformers to generate the text for a given input prompt.
```python
%%writefile code/inference.py
from typing import Dict, List, Any
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
def model_fn(model_dir):
# load model and processor from model_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
return model, tokenizer
def predict_fn(data, model_and_tokenizer):
# unpack model and tokenizer
model, tokenizer = model_and_tokenizer
# process input
inputs = data.pop("inputs", data)
parameters = data.pop("parameters", None)
# preprocess
input_ids = tokenizer(inputs, return_tensors="pt").input_ids
# pass inputs with all kwargs in data
if parameters is not None:
outputs = model.generate(input_ids, **parameters)
else:
outputs = model.generate(input_ids)
# postprocess the prediction
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True)
return [{"generated_text": prediction}]
```
## 2. Create SageMaker `model.tar.gz` artifact
To use our `inference.py` we need to bundle it together with our model weights into a `model.tar.gz`. The archive includes all our model artifacts to run inference. The `inference.py` script will be placed into a `code/` folder. We will use the `huggingface_hub` SDK to easily download [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) from [Hugging Face](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) and then upload it to Amazon S3 with the `sagemaker` SDK. The model `philschmid/flan-t5-xxl-sharded-fp16` is a sharded fp16 version of [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl).
Make sure the environment has enough disk space to store the model; ~30GB should be enough.
```python
from distutils.dir_util import copy_tree
from pathlib import Path
from tempfile import TemporaryDirectory
from huggingface_hub import snapshot_download
HF_MODEL_ID="philschmid/flan-t5-xxl-sharded-fp16"
# create model dir
model_tar_dir = Path(HF_MODEL_ID.split("/")[-1])
model_tar_dir.mkdir()
# setup temporary directory
with TemporaryDirectory() as tmpdir:
# download snapshot
snapshot_dir = snapshot_download(repo_id=HF_MODEL_ID, cache_dir=tmpdir)
# copy snapshot to model dir
copy_tree(snapshot_dir, str(model_tar_dir))
```
The next step is to copy the `code/` directory into the `model/` directory.
```python
# copy code/ to model dir
copy_tree("code/", str(model_tar_dir.joinpath("code")))
```
Before we can upload the model to Amazon S3 we have to create a `model.tar.gz` archive. It is important that the archive directly contains all files and not a folder with the files. For example, your file should look like this:
```
model.tar.gz/
|- config.json
|- pytorch_model-00001-of-00012.bin
|- tokenizer.json
|- ...
```
```python
import tarfile
import os
# helper to create the model.tar.gz
def compress(tar_dir=None,output_file="model.tar.gz"):
parent_dir=os.getcwd()
os.chdir(tar_dir)
with tarfile.open(os.path.join(parent_dir, output_file), "w:gz") as tar:
for item in os.listdir('.'):
print(item)
tar.add(item, arcname=item)
os.chdir(parent_dir)
compress(str(model_tar_dir))
```
After we created the `model.tar.gz` archive we can upload it to Amazon S3. We will use the `sagemaker` SDK to upload the model to our sagemaker session bucket.
```python
from sagemaker.s3 import S3Uploader
# upload model.tar.gz to s3
s3_model_uri = S3Uploader.upload(local_path="model.tar.gz", desired_s3_uri=f"s3://{sess.default_bucket()}/flan-t5-xxl")
print(f"model uploaded to: {s3_model_uri}")
```
## 3. Deploy the model to Amazon SageMaker
After we have uploaded our model archive we can deploy our model to Amazon SageMaker. We will use `HuggingFaceModel` to create our real-time inference endpoint.
We are going to deploy the model to a `g5.xlarge` instance. The `g5.xlarge` instance is a GPU instance with 1 NVIDIA A10G GPU. If you are interested in how you could add autoscaling to your endpoint you can check out [Going Production: Auto-scaling Hugging Face Transformers with Amazon SageMaker](https://www.philschmid.de/auto-scaling-sagemaker-huggingface).
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# deploy the endpoint endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g5.xlarge"
)
```
## 4. Run inference using the deployed model
The `.deploy()` call returns a `HuggingFacePredictor` object which can be used to request inference using the `.predict()` method. Our endpoint expects a `json` payload with at least an `inputs` key.
When using generative models, most of the time you want to configure or customize your prediction to fit your needs, for example by using beam search, configuring the max or min length of the generated sequence, or adjusting the temperature to reduce repetition.
The Transformers library provides different strategies and kwargs to do this, the Hugging Face Inference toolkit offers the same functionality using the parameters attribute of your request payload. Below you can find examples on how to generate text without parameters, with beam search, and using custom configurations. If you want to learn about different decoding strategies check out this [blog post](https://huggingface.co/blog/how-to-generate).
```python
payload = """Summarize the following text:
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
parameters = {
"early_stopping": True,
"length_penalty": 2.0,
"max_new_tokens": 50,
"temperature": 0,
"min_length": 10,
"no_repeat_ngram_size": 3,
}
# Run prediction
predictor.predict({
"inputs": payload,
"parameters" :parameters
})
# [{'generated_text': 'Peter stayed with Elizabeth at the hospital for 3 days.'}]
```
Let's try another example! This time we focus on question answering with a step-by-step approach including some simple math.
```python
payload = """Answer the following question step by step:
Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
"""
parameters = {
"early_stopping": True,
"length_penalty": 2.0,
"max_new_tokens": 50,
"temperature": 0,
}
# Run prediction
predictor.predict({
"inputs": payload,
"parameters" :parameters
})
# [{'generated_text': 'He buys 2 cans of tennis balls, so he has 2 * 3 = 6 tennis balls. He has 5 + 6 = 11 tennis balls now.'}]
```
### Delete model and endpoint
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Optimizing Transformers with Hugging Face Optimum | https://www.philschmid.de/optimizing-transformers-with-optimum | 2022-06-30 | [
"BERT",
"OnnxRuntime",
"HuggingFace",
"Optimization"
] | Learn how to optimize Hugging Face Transformers models using Optimum. The session will show you how to dynamically quantize and optimize a DistilBERT model using Hugging Face Optimum and ONNX Runtime. Hugging Face Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware. | _last update: 2022-11-18_
In this session, you will learn how to optimize Hugging Face Transformers models using Optimum. The session will show you how to dynamically quantize and optimize a DistilBERT model using [Hugging Face Optimum](https://huggingface.co/docs/optimum/index) and [ONNX Runtime](https://onnxruntime.ai/). Hugging Face Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
Note: dynamic quantization is currently only supported for CPUs, so we will not be utilizing GPUs / CUDA in this session.
By the end of this session, you will see how quantization and optimization with Hugging Face Optimum can result in a significant decrease in model latency while keeping almost 100% of the full-precision model's accuracy. Furthermore, you’ll see how to easily apply some advanced quantization and optimization techniques shown here so that your models take much less of an accuracy hit than they would otherwise.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Convert a Hugging Face `Transformers` model to ONNX for inference](#2-convert-a-hugging-face-transformers-model-to-onnx-for-inference)
3. [Apply graph optimization techniques to the ONNX model](#3-apply-graph-optimization-techniques-to-the-onnx-model)
4. [Apply dynamic quantization using `ORTQuantizer` from Optimum](#4-apply-dynamic-quantization-using-ortquantizer-from-optimum)
5. [Test inference with the quantized model](#5-test-inference-with-the-quantized-model)
6. [Evaluate the performance and speed](#6-evaluate-the-performance-and-speed)
7. [Push the quantized model to the Hub](#7-push-the-quantized-model-to-the-hub)
8. [Load and run inference with a quantized model from the hub](#8-load-and-run-inference-with-a-quantized-model-from-the-hub)
Let's get started! 🚀
_This tutorial was created and run on a c6i.xlarge AWS EC2 Instance._
---
## 1. Setup Development Environment
Our first step is to install Optimum, along with Evaluate and some other libraries. Running the following cell will install all the required packages for us including Transformers, PyTorch, and ONNX Runtime utilities:
```python
!pip install "optimum[onnxruntime]==1.5.0" evaluate[evaluator] sklearn mkl-include mkl --upgrade
```
> If you want to run inference on a GPU, you can install 🤗 Optimum with `pip install optimum[onnxruntime-gpu]`.
## 2. Convert a Hugging Face `Transformers` model to ONNX for inference
Before we can start quantizing we need to convert our vanilla `transformers` model to the `onnx` format. To do this we will use the new [ORTModelForSequenceClassification](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForSequenceClassification) class calling the `from_pretrained()` method with the `from_transformers` attribute. The model we are using is [optimum/distilbert-base-uncased-finetuned-banking77](https://huggingface.co/optimum/distilbert-base-uncased-finetuned-banking77), a fine-tuned DistilBERT model on the Banking77 dataset achieving an Accuracy score of `92.5`, with `text-classification` as the feature (task).
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer
from pathlib import Path
model_id="optimum/distilbert-base-uncased-finetuned-banking77"
dataset_id="banking77"
onnx_path = Path("onnx")
# load vanilla transformers and convert to onnx
model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)
```
One neat thing about 🤗 Optimum is that it allows you to run ONNX models with the `pipeline()` function from 🤗 Transformers. This means that you get all the pre- and post-processing features for free, without needing to re-implement them for each model! Here's how you can run inference with our vanilla ONNX model:
```python
from transformers import pipeline
vanilla_clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
vanilla_clf("Could you assist me in finding my lost card?")
#[{'label': 'lost_or_stolen_card', 'score': 0.9664045572280884}]
```
If you want to learn more about exporting transformers models, check out the [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx) blog post.
## 3. Apply graph optimization techniques to the ONNX model
Graph optimizations are essentially graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations.
Examples of graph optimizations include:
- **Constant folding**: evaluate constant expressions at compile time instead of runtime
- **Redundant node elimination**: remove redundant nodes without changing graph structure
- **Operator fusion**: merge one node (i.e. operator) into another so they can be executed together
![operator fusion](/static/blog/optimizing-transformers-with-optimum/operator_fusion.png)
If you want to learn more about graph optimization you can take a look at the [ONNX Runtime documentation](https://onnxruntime.ai/docs/performance/graph-optimizations.html). We are going to first optimize the model and then dynamically quantize it to be able to use transformers-specific operators such as QAttention for the quantization of attention layers.
To apply graph optimizations to our ONNX model, we will use the `ORTOptimizer()`. With the help of an `OptimizationConfig`, the `ORTOptimizer` makes it easy to apply these optimizations. The `OptimizationConfig` is the configuration class handling all the ONNX Runtime optimization parameters.
```python
from optimum.onnxruntime import ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig
# create ORTOptimizer and define optimization configuration
optimizer = ORTOptimizer.from_pretrained(model)
optimization_config = OptimizationConfig(optimization_level=99) # enable all optimizations
# apply the optimization configuration to the model
optimizer.optimize(
save_dir=onnx_path,
optimization_config=optimization_config,
)
```
To test performance we can use the ORTModelForSequenceClassification class again and provide an additional `file_name` parameter to load our optimized model. _(This also works for models available on the hub)._
```python
from transformers import pipeline
# load optimized model
model = ORTModelForSequenceClassification.from_pretrained(onnx_path, file_name="model_optimized.onnx")
# create optimized pipeline
optimized_clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
optimized_clf("Could you assist me in finding my lost card?")
```
## 4. Apply dynamic quantization using `ORTQuantizer` from Optimum
After we have optimized our model we can accelerate it even more by quantizing it using the `ORTQuantizer`. The `ORTQuantizer` can be used to apply dynamic quantization to decrease the size of the model size and accelerate latency and inference.
_We use the `avx512_vnni` config since the instance is powered by an intel ice-lake CPU supporting avx512._
```python
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig
# create ORTQuantizer and define quantization configuration
dynamic_quantizer = ORTQuantizer.from_pretrained(model)
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
# apply the quantization configuration to the model
model_quantized_path = dynamic_quantizer.quantize(
save_dir=onnx_path,
quantization_config=dqconfig,
)
```
Lets quickly check the new model size.
```python
import os
# get model file size
size = os.path.getsize(onnx_path / "model_optimized.onnx")/(1024*1024)
quantized_model = os.path.getsize(onnx_path / "model_optimized_quantized.onnx")/(1024*1024)
print(f"Model file size: {size:.2f} MB")
print(f"Quantized Model file size: {quantized_model:.2f} MB")
```
## 5. Test inference with the quantized model
[Optimum](https://huggingface.co/docs/optimum/main/en/pipelines#optimizing-with-ortoptimizer) has built-in support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). This allows us to leverage the same API that we know from using PyTorch and TensorFlow models.
Therefore we can load our quantized model with `ORTModelForSequenceClassification` class and transformers `pipeline`.
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer
model = ORTModelForSequenceClassification.from_pretrained(onnx_path,file_name="model_optimized_quantized.onnx")
tokenizer = AutoTokenizer.from_pretrained(onnx_path)
q8_clf = pipeline("text-classification",model=model, tokenizer=tokenizer)
q8_clf("What is the exchange rate like on this app?")
```
## 6. Evaluate the performance and speed
We can now leverage the `evaluator` from the [evaluate](https://huggingface.co/docs/evaluate/index) library to iterate over the test split of the Banking77 dataset, run predictions with our quantized pipeline for each data point, and compute the accuracy.
```python
from evaluate import evaluator
from datasets import load_dataset
eval = evaluator("text-classification")
eval_dataset = load_dataset("banking77", split="test")
results = eval.compute(
model_or_pipeline=q8_clf,
data=eval_dataset,
metric="accuracy",
input_column="text",
label_column="label",
label_mapping=model.config.label2id,
strategy="simple",
)
print(results)
```
```python
print(f"Vanilla model: 92.5%")
print(f"Quantized model: {results['accuracy']*100:.2f}%")
print(f"The quantized model achieves {round(results['accuracy']/0.925,4)*100:.2f}% accuracy of the fp32 model")
# Vanilla model: 92.5%
# Quantized model: 92.24%
# The quantized model achieves 99.72% accuracy of the fp32 model
```
Okay, now let's test the performance (latency) of our quantized model. We are going to use a payload with a sequence length of 128 for the benchmark. To keep it simple, we are going to use a python loop and calculate the average, standard deviation & p95 latency for our vanilla model and for the quantized model.
```python
from time import perf_counter
import numpy as np
payload="Hello my name is Philipp. I am getting in touch with you because i didn't get a response from you. What do I need to do to get my new card which I have requested 2 weeks ago? Please help me and answer this email in the next 7 days. Best regards and have a nice weekend "*2
print(f'Payload sequence length: {len(tokenizer(payload)["input_ids"])}')
def measure_latency(pipe):
latencies = []
# warm up
for _ in range(10):
_ = pipe(payload)
# Timed run
for _ in range(300):
start_time = perf_counter()
_ = pipe(payload)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
time_p95_ms = 1000 * np.percentile(latencies,95)
return f"P95 latency (ms) - {time_p95_ms}; Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f};", time_p95_ms
vanilla_model=measure_latency(vanilla_clf)
quantized_model=measure_latency(q8_clf)
print(f"Vanilla model: {vanilla_model[0]}")
print(f"Quantized model: {quantized_model[0]}")
print(f"Improvement through quantization: {round(vanilla_model[1]/quantized_model[1],2)}x")
# Payload sequence length: 128
# Vanilla model: P95 latency (ms) - 68.4711932000539; Average latency (ms) - 56.28 +\- 6.66;
# Quantized model: P95 latency (ms) - 27.554391949979617; Average latency (ms) - 27.29 +\- 0.15;
# Improvement through quantization: 2.48x
```
We managed to accelerate our model latency from 68.4ms to 27.55ms or 2.48x while keeping 99.72% of the accuracy.
![performance](/static/blog/optimizing-transformers-with-optimum/performance.png)
## 7. Push the quantized model to the Hub
The Optimum model classes like `ORTModelForSequenceClassification` are integrated with the Hugging Face Model Hub, which means you can not only load models from the Hub, but also push your models to the Hub with the `push_to_hub()` method. That way we can now save our quantized model on the Hub to be used, for example, inside our inference API.
_We have to make sure that we are also saving the `tokenizer` as well as the `config.json` to have a good inference experience._
If you haven't logged into the Hugging Face Hub yet, you can use `notebook_login` to do so.
```python
from huggingface_hub import notebook_login
notebook_login()
```
After we have configured our Hugging Face Hub credentials we can push the model.
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
tmp_store_directory="onnx_hub_repo"
repository_id="distilbert-onnx-banking77"
model.save_pretrained(tmp_store_directory)
tokenizer.save_pretrained(tmp_store_directory)
model.push_to_hub(tmp_store_directory,
repository_id=repository_id,
use_auth_token=True
)
```
## 8. Load and run inference with a quantized model from the hub
This step serves as a demonstration of how you could use Optimum in your API to load and use our quantized model.
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer
model = ORTModelForSequenceClassification.from_pretrained("philschmid/distilbert-onnx-banking77")
tokenizer = AutoTokenizer.from_pretrained("philschmid/distilbert-onnx-banking77")
remote_clx = pipeline("text-classification",model=model, tokenizer=tokenizer)
remote_clx("What is the exchange rate like on this app?")
```
## Conclusion
We successfully quantized our vanilla Transformers model with Hugging Face Optimum and managed to decrease our model latency from 68.4ms to 27.55ms or 2.48x while keeping 99.72% of the accuracy.
---
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/optimum), or on the [forum](https://discuss.huggingface.co/). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Accelerated document embeddings with Hugging Face Transformers and AWS Inferentia | https://www.philschmid.de/huggingface-sentence-transformers-aws-inferentia | 2022-04-19 | [
"HuggingFace",
"AWS",
"BERT",
"Inferentia"
] | Learn how to accelerate Sentence Transformers inference using Hugging Face Transformers and AWS Inferentia. | notebook: [sentence-transformers-huggingface-inferentia](https://github.com/philschmid/sentence-transformers-huggingface-inferentia/blob/main/sagemaker-notebook.ipynb)
The adoption of [BERT](https://huggingface.co/blog/bert-101) and [Transformers](https://huggingface.co/docs/transformers/index) continues to grow. Transformer-based models are now not only achieving state-of-the-art performance in Natural Language Processing but also for [Computer Vision](https://arxiv.org/abs/2010.11929), [Speech](https://arxiv.org/abs/2006.11477), [Time-Series](https://arxiv.org/abs/2002.06103) and especially [Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html). 💬 🖼 🎤 ⏳
Semantic search seeks to improve search accuracy by understanding the content of the search query. The idea behind semantic search is to embed all documents into a vector space. At search time, the query is embedded into the same vector space and the closest embeddings are found. Transformers have taken over this domain and are currently achieving state-of-the-art performance but they are often slow and search shouldn’t be slow to feel natural.
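To make the retrieval step concrete, here is a minimal, illustrative sketch (not part of the original post) of this idea, assuming the document and query embeddings have already been computed with a sentence-transformers model; the array shapes and names are placeholders.
```python
import numpy as np
# stand-ins for pre-computed embeddings (e.g. 384-dimensional MiniLM vectors)
doc_embeddings = np.random.rand(1000, 384)  # one row per document
query_embedding = np.random.rand(384)       # the embedded search query
# normalize so the dot product equals cosine similarity
docs = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
query = query_embedding / np.linalg.norm(query_embedding)
# rank all documents by similarity to the query and keep the 5 closest
scores = docs @ query
top_k = np.argsort(-scores)[:5]
print(top_k, scores[top_k])
```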
AWS's approach to solving this performance challenge was to design a custom machine learning chip optimized for inference workloads, called [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/?nc1=h_ls). AWS says that AWS Inferentia _“delivers up to 80% lower cost per inference and up to 2.3X higher throughput than comparable current generation GPU-based Amazon EC2 instances.”_
The real value of AWS Inferentia instances compared to GPU comes through the multiple Neuron Cores available on each device. A Neuron Core is the custom accelerator inside AWS Inferentia. Each Inferentia chip comes with 4x Neuron Cores. This enables you to either load 1 model on each core (for high throughput) or 1 model across all cores (for lower latency).
---
In this end-to-end tutorial, you will learn how to speed up [Sentence-Transformers](https://www.sbert.net/) like SBERT for creating sentence embedding using Hugging Face Transformers, Amazon SageMaker, and AWS Inferentia to achieve sub 5ms latency and up to 1000 requests per second per instance.
You will learn how to:
- [1. Convert your Hugging Face sentence transformers to AWS Neuron (Inferentia)](#1-convert-your-hugging-face-sentence-transformers-to-aws-neuron)
- [2. Create a custom `inference.py` script for `sentence-embeddings`](#2-create-a-custom-inferencepy-script-for-sentence-embeddings)
- [3. Create and upload the neuron model and inference script to Amazon S3](#3-create-and-upload-the-neuron-model-and-inference-script-to-amazon-s3)
- [4. Deploy a Real-time Inference Endpoint on Amazon SageMaker](#4-deploy-a-real-time-inference-endpoint-on-amazon-sagemaker)
- [5. Run and evaluate Inference performance of BERT on Inferentia](#5-run-and-evaluate-inference-performance-of-bert-on-inferentia)
Let's get started! 🚀
---
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
## 1. Convert your Hugging Face Sentence Transformers to AWS Neuron
We are going to use the [AWS Neuron SDK for AWS Inferentia](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html). The Neuron SDK includes a deep learning compiler, runtime, and tools for converting and compiling PyTorch and TensorFlow models to neuron compatible models, which can be run on [EC2 Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/).
As a first step, we need to install the [Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/neuron-install-guide.html) and the required packages.
_Tip: If you are using Amazon SageMaker Notebook Instances or Studio you can go with the `conda_python3` conda kernel._
```python
# Set Pip repository to point to the Neuron repository
!pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Install Neuron PyTorch
!pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] sagemaker>=2.79.0 transformers==4.12.3 --upgrade
```
After we have installed the Neuron SDK we can load and convert our model. Neuron models are converted using `torch_neuron` with its `trace` method, similar to `torchscript`. You can find more information in our [documentation](https://huggingface.co/docs/transformers/serialization#torchscript).
We are going to use the [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model. It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
```python
model_id = "sentence-transformers/all-MiniLM-L6-v2"
```
At the time of writing, the [AWS Neuron SDK does not support dynamic shapes](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#dynamic-shapes), which means that the input size needs to be static for compiling and inference.
In simpler terms, this means that if the model is compiled with an input of batch size 1 and a sequence length of 16, it can only run inference on inputs with that same shape.
_When using a `t2.medium` instance the compiling takes around 2-3 minutes_
```python
import os
import tensorflow # to workaround a protobuf version conflict issue
import torch
import torch.neuron
from transformers import AutoTokenizer, AutoModel
# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, torchscript=True)
# create dummy input for max length 128
dummy_input = "dummy input which will be padded later"
max_length = 128
embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt")
neuron_inputs = tuple(embeddings.values())
# compile model with torch.neuron.trace and update config
model_neuron = torch.neuron.trace(model, neuron_inputs)
model.config.update({"traced_sequence_length": max_length})
# save tokenizer, neuron model and config for later use
save_dir="tmp"
os.makedirs("tmp",exist_ok=True)
model_neuron.save(os.path.join(save_dir,"neuron_model.pt"))
tokenizer.save_pretrained(save_dir)
model.config.save_pretrained(save_dir)
```
## 2. Create a custom inference.py script for sentence-embeddings
The [Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) supports zero-code deployments on top of the [pipeline feature](https://huggingface.co/transformers/main_classes/pipelines.html) from 🤗 Transformers. This allows users to deploy Hugging Face transformers without an inference script [[Example](https://github.com/huggingface/notebooks/blob/master/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb)].
Currently, this feature is not supported with AWS Inferentia, which means we need to provide an `inference.py` for running inference.
_If you are interested in support for zero-code deployments for Inferentia, let us know on the [forum](https://discuss.huggingface.co/c/sagemaker/17)._
---
To use the inference script, we need to create an `inference.py` script. In our example, we are going to overwrite the `model_fn` to load our neuron model and the `predict_fn` to create a sentence-embeddings pipeline.
If you want to know more about the `inference.py` script check out this [example](https://github.com/huggingface/notebooks/blob/master/sagemaker/17_custom_inference_script/sagemaker-notebook.ipynb). It explains amongst other things what the `model_fn` and `predict_fn` are.
```python
!mkdir code
```
We are using the `NEURON_RT_NUM_CORES=1` to make sure that each HTTP worker uses 1 Neuron core to maximize throughput.
```python
%%writefile code/inference.py
import os
from transformers import AutoConfig, AutoTokenizer
import torch
import torch.neuron
import torch.nn.functional as F
# To use one neuron core per worker
os.environ["NEURON_RT_NUM_CORES"] = "1"
# saved weights name
AWS_NEURON_TRACED_WEIGHTS_NAME = "neuron_model.pt"
# Helper: Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
def model_fn(model_dir):
# load tokenizer and neuron model from model_dir
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = torch.jit.load(os.path.join(model_dir, AWS_NEURON_TRACED_WEIGHTS_NAME))
model_config = AutoConfig.from_pretrained(model_dir)
return model, tokenizer, model_config
def predict_fn(data, model_tokenizer_model_config):
# destruct model and tokenizer
model, tokenizer, model_config = model_tokenizer_model_config
# Tokenize sentences
inputs = data.pop("inputs", data)
encoded_input = tokenizer(
inputs,
return_tensors="pt",
max_length=model_config.traced_sequence_length,
padding="max_length",
truncation=True,
)
# convert to tuple for neuron model
neuron_inputs = tuple(encoded_input.values())
# Compute token embeddings
with torch.no_grad():
model_output = model(*neuron_inputs)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
    # return dictionary, which will be JSON serializable
return {"vectors": sentence_embeddings[0].tolist()}
```
## 3. Create and upload the neuron model and inference script to Amazon S3
Before we can deploy our neuron model to Amazon SageMaker we need to create a `model.tar.gz` archive with all our model artifacts saved into `tmp/`, e.g. `neuron_model.pt` and upload this to Amazon S3.
To do this we need to set up our permissions.
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
Next, we create our `model.tar.gz`. The `inference.py` script will be placed into a `code/` folder.
```python
# copy inference.py into the code/ directory of the model directory.
!cp -r code/ tmp/code/
# create a model.tar.gz archive with all the model artifacts and the inference.py script.
%cd tmp
!tar zcvf model.tar.gz *
%cd ..
```
Now we can upload our `model.tar.gz` to our session S3 bucket with `sagemaker`.
```python
from sagemaker.s3 import S3Uploader
# create s3 uri
s3_model_path = f"s3://{sess.default_bucket()}/neuron/{model_id}"
# upload model.tar.gz
s3_model_uri = S3Uploader.upload(local_path="tmp/model.tar.gz",desired_s3_uri=s3_model_path)
print(f"model artifacts uploaded to {s3_model_uri}")
```
## 4. Deploy a Real-time Inference Endpoint on Amazon SageMaker
After we have uploaded our `model.tar.gz` to Amazon S3, we can create a custom `HuggingFaceModel`. This class will be used to create and deploy our real-time inference endpoint on Amazon SageMaker.
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py37', # python version used
)
# Let SageMaker know that we've already compiled the model via neuron-cc
huggingface_model._is_compiled_model = True
# deploy the endpoint endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type="ml.inf1.xlarge" # AWS Inferentia Instance
)
```
## 5. Run and evaluate Inference performance of BERT on Inferentia
The `.deploy()` call returns a `HuggingFacePredictor` object which can be used to request inference.
```python
data = {
"inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
res = predictor.predict(data=data)
res
```
We managed to deploy our neuron-compiled BERT to AWS Inferentia on Amazon SageMaker. Now, let's test its performance. As a dummy load test, we will loop and send 10,000 synchronous requests to our endpoint.
```python
# send 10000 requests
for i in range(10000):
resp = predictor.predict(
data={"inputs": "it 's a charming and often affecting journey ."}
)
```
Let's inspect the performance in CloudWatch.
```python
print(f"https://console.aws.amazon.com/cloudwatch/home?region={sess.boto_region_name}#metricsV2:graph=~(metrics~(~(~'AWS*2fSageMaker~'ModelLatency~'EndpointName~'{predictor.endpoint_name}~'VariantName~'AllTraffic))~view~'timeSeries~stacked~false~region~'{sess.boto_region_name}~start~'-PT5M~end~'P0D~stat~'Average~period~30);query=~'*7bAWS*2fSageMaker*2cEndpointName*2cVariantName*7d*20{predictor.endpoint_name}")
```
The average latency for our MiniLM model is `3-4.5ms` for a sequence length of 128.
![performance](/static/blog/huggingface-sentence-transformers-aws-inferentia/performance.png)
### **Delete model and endpoint**
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## Conclusion
We successfully managed to compile a Sentence Transformer to an AWS Inferentia-compatible Neuron model. After that we deployed our Neuron model to Amazon SageMaker using the new Hugging Face Inference DLC. We managed to achieve `3-4.5ms` latency per Neuron core, which is faster than CPU in terms of latency, and achieves a higher throughput than GPUs since we ran 4 models in parallel. We can achieve a throughput of up to _1000 documents per second_ with a 4ms latency at a sequence length of 128 per `inf1.xlarge` instance, which costs around ~$200 per month.
If you or your company are currently using Sentence Transformers for semantic search tasks (document embeddings, sentence embeddings, ranking), and 3-4.5ms latency meets your requirements, you should switch to AWS Inferentia. This will not only save costs, but can also increase efficiency and performance for your models.
We are planning to do a more detailed case study on cost-performance of transformers in the future, so stay tuned!
Also if you want to learn more about accelerating transformers you should also check out Hugging Face [optimum](https://github.com/huggingface/optimum).
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Document AI: Fine-tuning Donut for document-parsing using Hugging Face Transformers | https://www.philschmid.de/fine-tuning-donut | 2022-09-06 | [
"DocumentAI",
"HuggingFace",
"Transformers",
"Donut"
] | Learn how to fine-tune Donut-base for document understanding/document parsing using Hugging Face Transformers. Donut is a new document-understanding model achieving state-of-the-art performance and can be used for commercial applications. | In this blog, you will learn how to fine-tune [Donut-base](https://huggingface.co/naver-clova-ix/donut-base) for document understanding/document parsing using Hugging Face Transformers. Donut is a new document-understanding model achieving state-of-the-art performance with an MIT license, which allows it to be used for commercial purposes, in contrast to other models like LayoutLMv2/LayoutLMv3.
We are going to use all of the great features from the Hugging Face ecosystem, like model versioning and experiment tracking.
We will use the [SROIE](https://github.com/zzzDavid/ICDAR-2019-SROIE) dataset, a collection of 1000 scanned receipts, including their OCR. More information about the dataset can be found in the [repository](https://github.com/zzzDavid/ICDAR-2019-SROIE).
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load SROIE dataset](#2-load-sroie-dataset)
3. [Prepare dataset for Donut](#3-prepare-dataset-for-donut)
4. [Fine-tune and evaluate Donut model](#4-fine-tune-and-evaluate-donut-model)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: Document Understanding Transformer (Donut) by ClovaAI
Document Understanding Transformer (Donut) is a new Transformer model for OCR-free document understanding. It doesn't require an OCR engine to process scanned documents but achieves state-of-the-art performance on various visual document understanding tasks, such as visual document classification or information extraction (a.k.a. document parsing).
Donut is a multimodal sequence-to-sequence model with a vision encoder ([Swin Transformer](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/swin#overview)) and a text decoder ([BART](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/bart)). The encoder receives the image and encodes it into an embedding, which is then passed to the decoder, which generates a sequence of tokens.
![donut](/static/blog/fine-tuning-donut/donut.png)
- Paper: https://arxiv.org/abs/2111.15664
- Official repo: https://github.com/clovaai/donut
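To illustrate how this encoder/decoder interaction looks in code, below is a rough, hypothetical inference sketch using the plain pre-trained checkpoint; the image path and the `<s>` task prompt are placeholders and differ from the fine-tuning setup used later in this post.
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel
# pre-trained weights only; a fine-tuned checkpoint would come with its own task prompt
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
image = Image.open("receipt.jpg").convert("RGB")  # hypothetical sample document
pixel_values = processor(image, return_tensors="pt").pixel_values  # input for the vision encoder
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids
# the text decoder autoregressively generates the structured token sequence
with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=512,
        eos_token_id=processor.tokenizer.eos_token_id,
        pad_token_id=processor.tokenizer.pad_token_id,
    )
print(processor.batch_decode(outputs)[0])
```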
---
Now we know how Donut works, so let's get started. 🚀
_Note: This tutorial was created and run on a p3.2xlarge AWS EC2 Instance including a NVIDIA V100._
## 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
_Note: At the time of writing, Donut is not yet included in the PyPI version of Transformers, so we need to install it from the `main` branch. Donut will be added in version `4.22.0`._
```python
!pip install -q git+https://github.com/huggingface/transformers.git
# !pip install -q "transformers>=4.22.0" # comment in when version is released
!pip install -q datasets sentencepiece tensorboard
# install git-fls for pushing model and logs to the hugging face hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## 2. Load SROIE dataset
We will use the [SROIE](https://github.com/zzzDavid/ICDAR-2019-SROIE) dataset, a collection of 1000 scanned receipts including their OCR; more specifically, we will use the dataset from task 2 "Scanned Receipt OCR". The available dataset on Hugging Face ([darentang/sroie](https://huggingface.co/datasets/darentang/sroie)) is not compatible with Donut. That's why we will use the original dataset together with the `imagefolder` feature of `datasets` to load our dataset. Learn more about loading image data [here](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#load-image-data).
_Note: The test data for task2 is sadly not available. Meaning that we end up only with 624 images._
First, we will clone the repository, extract the dataset into a separate folder and remove the unnecessary files.
```bash
%%bash
# clone repository
git clone https://github.com/zzzDavid/ICDAR-2019-SROIE.git
# copy data
cp -r ICDAR-2019-SROIE/data ./
# clean up
rm -rf ICDAR-2019-SROIE
rm -rf data/box
```
Now we have two folders inside the `data/` directory. One contains the images of the receipts and the other contains the OCR text. The next step is to create a `metadata.jsonl` file that contains information about the images, including the OCR text. This is necessary for the `imagefolder` feature of `datasets`.
The `metadata.jsonl` file should in the end look similar to the example below.
```json
{"file_name": "0001.png", "text": "This is a golden retriever playing with a ball"}
{"file_name": "0002.png", "text": "A german shepherd"}
```
In our example, the `"text"` column will contain the OCR text of the image, which will later be used for creating the Donut-specific format.
```python
import os
import json
from pathlib import Path
import shutil
# define paths
base_path = Path("data")
metadata_path = base_path.joinpath("key")
image_path = base_path.joinpath("img")
# define metadata list
metadata_list = []
# parse metadata
for file_name in metadata_path.glob("*.json"):
with open(file_name, "r") as json_file:
# load json file
data = json.load(json_file)
# create "text" column with json string
text = json.dumps(data)
# add to metadata list if image exists
if image_path.joinpath(f"{file_name.stem}.jpg").is_file():
metadata_list.append({"text":text,"file_name":f"{file_name.stem}.jpg"})
# delete json file
# write jsonline file
with open(image_path.joinpath('metadata.jsonl'), 'w') as outfile:
for entry in metadata_list:
json.dump(entry, outfile)
outfile.write('\n')
# remove old meta data
shutil.rmtree(metadata_path)
```
Good Job! Now we can load the dataset using the `imagefolder` feature of `datasets`.
```python
import os
import json
from pathlib import Path
import shutil
from datasets import load_dataset
# define paths
base_path = Path("data")
metadata_path = base_path.joinpath("key")
image_path = base_path.joinpath("img")
# Load dataset
dataset = load_dataset("imagefolder", data_dir=image_path, split="train")
print(f"Dataset has {len(dataset)} images")
print(f"Dataset features are: {dataset.features.keys()}")
```
Now, lets take a closer look at our dataset
```python
import random
random_sample = random.randint(0, len(dataset))
print(f"Random sample is {random_sample}")
print(f"OCR text is {dataset[random_sample]['text']}")
dataset[random_sample]['image'].resize((250,400))
# OCR text is {"company": "LIM SENG THO HARDWARE TRADING", "date": "29/12/2017", "address": "NO 7, SIMPANG OFF BATU VILLAGE, JALAN IPOH BATU 5, 51200 KUALA LUMPUR MALAYSIA", "total": "6.00"}
```
![png](/static/blog/fine-tuning-donut/donut_sroie_14_1.png)
## 3. Prepare dataset for Donut
As we learned in the introduction, Donut is a sequence-to-sequence model with a vision encoder and text decoder. When fine-tuning the model we want it to generate the `"text"` based on the image we pass it. Similar to NLP tasks, we have to tokenize and preprocess the text.
Before we can tokenize the text, we need to transform the JSON string into a Donut compatible document.
**current JSON string**
```json
{
"company": "ADVANCO COMPANY",
"date": "17/01/2018",
"address": "NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR",
"total": "7.00"
}
```
**Donut document**
```json
<s></s><s_company>ADVANCO COMPANY</s_company><s_date>17/01/2018</s_date><s_address>NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR</s_address><s_total>7.00</s_total></s>
```
To easily create those documents the ClovaAI team has created a [json2token](https://github.com/clovaai/donut/blob/master/donut/model.py#L497) method, which we extract and then apply.
```python
new_special_tokens = [] # new tokens which will be added to the tokenizer
task_start_token = "<s>" # start of task token
eos_token = "</s>" # eos token of tokenizer
def json2token(obj, update_special_tokens_for_json_key: bool = True, sort_json_key: bool = True):
"""
Convert an ordered JSON object into a token sequence
"""
if type(obj) == dict:
if len(obj) == 1 and "text_sequence" in obj:
return obj["text_sequence"]
else:
output = ""
if sort_json_key:
keys = sorted(obj.keys(), reverse=True)
else:
keys = obj.keys()
for k in keys:
if update_special_tokens_for_json_key:
new_special_tokens.append(fr"<s_{k}>") if fr"<s_{k}>" not in new_special_tokens else None
new_special_tokens.append(fr"</s_{k}>") if fr"</s_{k}>" not in new_special_tokens else None
output += (
fr"<s_{k}>"
+ json2token(obj[k], update_special_tokens_for_json_key, sort_json_key)
+ fr"</s_{k}>"
)
return output
elif type(obj) == list:
return r"<sep/>".join(
[json2token(item, update_special_tokens_for_json_key, sort_json_key) for item in obj]
)
else:
# excluded special tokens for now
obj = str(obj)
if f"<{obj}/>" in new_special_tokens:
obj = f"<{obj}/>" # for categorical special tokens
return obj
def preprocess_documents_for_donut(sample):
# create Donut-style input
text = json.loads(sample["text"])
d_doc = task_start_token + json2token(text) + eos_token
# convert all images to RGB
image = sample["image"].convert('RGB')
return {"image": image, "text": d_doc}
proc_dataset = dataset.map(preprocess_documents_for_donut)
print(f"Sample: {proc_dataset[45]['text']}")
print(f"New special tokens: {new_special_tokens + [task_start_token] + [eos_token]}")
# Sample: <s><s_total>$6.90</s_total><s_date>27 MAR 2018</s_date><s_company>UNIHAKKA INTERNATIONAL SDN BHD</s_company><s_address>12, JALAN TAMPOI 7/4,KAWASAN PARINDUSTRIAN TAMPOI,81200 JOHOR BAHRU,JOHOR</s_address></s>
# New special tokens: ['<s_total>', '</s_total>', '<s_date>', '</s_date>', '<s_company>', '</s_company>', '<s_address>', '</s_address>', '<s>', '</s>']
```
The next step is to tokenize our text and encode the images into tensors. Therefore we need to load `DonutProcessor`, add our new special tokens and adjust the size of the images when processing from `[1920, 2560]` to `[720, 960]` to use less memory and speed up training.
```python
from transformers import DonutProcessor
# Load processor
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
# add new special tokens to tokenizer
processor.tokenizer.add_special_tokens({"additional_special_tokens": new_special_tokens + [task_start_token] + [eos_token]})
# we update some settings which differ from pretraining; namely the size of the images + no rotation required
# resizing the image to smaller sizes from [1920, 2560] to [720, 960]
processor.feature_extractor.size = [720,960] # should be (width, height)
processor.feature_extractor.do_align_long_axis = False
```
Now, we can prepare our dataset, which we will use for the training later.
```python
def transform_and_tokenize(sample, processor=processor, split="train", max_length=512, ignore_id=-100):
# create tensor from image
try:
pixel_values = processor(
sample["image"], random_padding=split == "train", return_tensors="pt"
).pixel_values.squeeze()
except Exception as e:
print(sample)
print(f"Error: {e}")
return {}
# tokenize document
input_ids = processor.tokenizer(
sample["text"],
add_special_tokens=False,
max_length=max_length,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"].squeeze(0)
labels = input_ids.clone()
labels[labels == processor.tokenizer.pad_token_id] = ignore_id # model doesn't need to predict pad token
return {"pixel_values": pixel_values, "labels": labels, "target_sequence": sample["text"]}
# need at least 32-64GB of RAM to run this
processed_dataset = proc_dataset.map(transform_and_tokenize,remove_columns=["image","text"])
```
```python
# from datasets import load_from_disk
# from transformers import DonutProcessor
## COMMENT IN in case you want to save the processed dataset to disk in case of error later
# processed_dataset.save_to_disk("processed_dataset")
# processor.save_pretrained("processor")
## COMMENT IN in case you want to load the processed dataset from disk in case of error later
# processed_dataset = load_from_disk("processed_dataset")
# processor = DonutProcessor.from_pretrained("processor")
```
The last step is to split the dataset into train and validation sets.
```python
processed_dataset = processed_dataset.train_test_split(test_size=0.1)
print(processed_dataset)
```
## 4. Fine-tune and evaluate Donut model
After we have processed our dataset, we can start training our model. Therefore we first need to load the [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) model with the `VisionEncoderDecoderModel` class. The `donut-base` includes only the pre-trained weights and was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewok et al. and first released in [this repository](https://github.com/clovaai/donut).
In addition to loading our model, we are resizing the `embedding` layer to match newly added tokens and adjusting the `image_size` of our encoder to match our dataset. We are also adding tokens for inference later.
```python
import torch
from transformers import VisionEncoderDecoderModel, VisionEncoderDecoderConfig
# Load model from huggingface.co
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
# Resize embedding layer to match vocabulary size
new_emb = model.decoder.resize_token_embeddings(len(processor.tokenizer))
print(f"New embedding size: {new_emb}")
# Adjust our image size and output sequence lengths
model.config.encoder.image_size = processor.feature_extractor.size[::-1] # (height, width)
model.config.decoder.max_length = len(max(processed_dataset["train"]["labels"], key=len))
# Add task token for decoder to start
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.decoder_start_token_id = processor.tokenizer.convert_tokens_to_ids(['<s>'])[0]
# is done by Trainer
# device = "cuda" if torch.cuda.is_available() else "cpu"
# model.to(device)
```
Before we can start our training we need to define the hyperparameters (`Seq2SeqTrainingArguments`) we want to use for our training. We are leveraging the [Hugging Face Hub](https://huggingface.co/models) integration of the `Seq2SeqTrainer` to automatically push our checkpoints, logs and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
# hyperparameters used for multiple args
hf_repository_id = "donut-base-sroie"
# Arguments for training
training_args = Seq2SeqTrainingArguments(
output_dir=hf_repository_id,
num_train_epochs=3,
learning_rate=2e-5,
per_device_train_batch_size=2,
weight_decay=0.01,
fp16=True,
logging_steps=100,
save_total_limit=2,
evaluation_strategy="no",
save_strategy="epoch",
predict_with_generate=True,
# push to hub parameters
report_to="tensorboard",
push_to_hub=True,
hub_strategy="every_save",
hub_model_id=hf_repository_id,
hub_token=HfFolder.get_token(),
)
# Create Trainer
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=processed_dataset["train"],
)
```
We can start our training by using the `train` method of the `Seq2SeqTrainer`.
```python
# Start training
trainer.train()
```
After our training is done we also want to save our processor to the Hugging Face Hub and create a model card.
```python
# Save processor and create model card
processor.save_pretrained(hf_repository_id)
trainer.create_model_card()
trainer.push_to_hub()
```
We successfully trained our model. Now let's test it and then evaluate its accuracy.
```python
import re
import transformers
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel
import torch
import random
import numpy as np
# hide logs
transformers.logging.disable_default_handler()
# Load our model from Hugging Face
processor = DonutProcessor.from_pretrained("philschmid/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("philschmid/donut-base-sroie")
# Move model to GPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# Load random document image from the test set
test_sample = processed_dataset["test"][random.randint(1, 50)]
def run_prediction(sample, model=model, processor=processor):
# prepare inputs
pixel_values = torch.tensor(sample["pixel_values"]).unsqueeze(0)
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
# run inference
outputs = model.generate(
pixel_values.to(device),
decoder_input_ids=decoder_input_ids.to(device),
max_length=model.decoder.config.max_position_embeddings,
early_stopping=True,
pad_token_id=processor.tokenizer.pad_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
use_cache=True,
num_beams=1,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
return_dict_in_generate=True,
)
# process output
prediction = processor.batch_decode(outputs.sequences)[0]
prediction = processor.token2json(prediction)
# load reference target
target = processor.token2json(sample["target_sequence"])
return prediction, target
prediction, target = run_prediction(test_sample)
print(f"Reference:\n {target}")
print(f"Prediction:\n {prediction}")
processor.feature_extractor.to_pil_image(np.array(test_sample["pixel_values"])).resize((350,600))
```
Result
```bash
Reference:
{'total': '9.30', 'date': '26/11/2017', 'company': 'SANYU STATIONERY SHOP', 'address': 'NO. 31G&33G, JALAN SETIA INDAH X ,U13/X 40170 SETIA ALAM'}
Prediction:
{'total': '9.30', 'date': '26/11/2017', 'company': 'SANYU STATIONERY SHOP', 'address': 'NO. 31G&33G, JALAN SETIA INDAH X,U13/X 40170 SETIA ALAM'}
```
![png](/static/blog/fine-tuning-donut/donut_sroie_33_1.png)
Nice 😍🔥 Our fine-tuned model parsed the document correctly and extracted the right values. Our next step is to evaluate the model on the test set. Since the model itself is a seq2seq model, evaluation is not that straightforward.
To keep things simple, we will use accuracy as the metric and compare the predicted value for each key in the dictionary to check whether they are equal. This evaluation technique is simple and biased, since only exact matches count as correct; e.g., if the model does not reproduce a whitespace exactly, as in the example above, the prediction is counted as wrong.
```python
from tqdm import tqdm
# define counter for samples
true_counter = 0
total_counter = 0
# iterate over dataset
for sample in tqdm(processed_dataset["test"]):
prediction, target = run_prediction(sample)
for s in zip(prediction.values(), target.values()):
if s[0] == s[1]:
true_counter += 1
total_counter += 1
print(f"Accuracy: {(true_counter/total_counter)*100}%")
# Accuracy: 75.0%
```
Our model achieves an accuracy of `75%` on the test set.
_Note: The evaluation we did was very simple and only counted exact string matches for each key of the dictionary as correct, which introduces a strong bias into the metric. With that in mind, an accuracy of `75%` is pretty good._
Our first inference test is an excellent example of why this metric is biased. There, the model predicted `NO. 31G&33G, JALAN SETIA INDAH X,U13/X 40170 SETIA ALAM` for the `address`, while the ground truth was `NO. 31G&33G, JALAN SETIA INDAH X ,U13/X 40170 SETIA ALAM`; the only difference is the whitespace between `X` and `,U13/X`.
In our evaluation loop, this was not counted as a truthy value.
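As a rough illustration, a slightly relaxed scoring function that ignores whitespace differences would count the `address` example above as correct. This is only a sketch of an alternative comparison (the function and variable names are hypothetical), not part of the original evaluation:
```python
# hypothetical, relaxed comparison: ignore all whitespace differences
def relaxed_matches(prediction: dict, target: dict) -> int:
    matches = 0
    for key, target_value in target.items():
        predicted_value = prediction.get(key, "")
        # strip every whitespace character before comparing
        if "".join(predicted_value.split()) == "".join(target_value.split()):
            matches += 1
    return matches

# example from the inference test above
pred = {"address": "NO. 31G&33G, JALAN SETIA INDAH X,U13/X 40170 SETIA ALAM"}
ref = {"address": "NO. 31G&33G, JALAN SETIA INDAH X ,U13/X 40170 SETIA ALAM"}
print(relaxed_matches(pred, ref))  # 1 -> counted as correct
```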
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
A remote guide to re:Invent 2021 machine learning sessions | https://www.philschmid.de/re-invent-2021 | 2021-11-11 | [
"AWS",
"ReInvent"
] | If you are like me you are not from the USA and cannot easily travel to Las Vegas. I have the perfect remote guide for your perfect virtual re:Invent 2021 focused on NLP and Machine Learning. | After last year's amazing fully virtual event and improvements in the current situation, Amazonians can finally gather in Las Vegas again for a somewhat traditional re:Invent. But this year will be the second year that re:Invent has a large virtual/remote part.
If you are like me you are not from the USA and cannot easily travel to Las Vegas. I have the perfect remote guide for your perfect virtual re:Invent 2021 focused on NLP and Machine Learning.
And then maybe next year we will see each other in Las Vegas. 🤞🏻
If you haven't yet registered for free do it now at [https://reinvent.awsevents.com/register/](https://reinvent.awsevents.com/register/)
## Guide
Let's start with the obvious ones: the two keynotes. It will be the first year since [Andy Jassy](https://de.wikipedia.org/wiki/Andy_Jassy) became Amazon CEO that he is not giving the opening keynote. This year it is [Adam Selipsky](https://www.linkedin.com/in/adamselipsky)'s turn.
I can really recommend both keynotes to everyone, whether machine learning-oriented or not. They give an update on all the new developments, services, solutions, achievements, and much more.
## [[ARS202]](https://reinvent.awsevents.com/keynotes/) **AWS re:Invent keynote live stream: CEO**
### Tuesday, November 30; 8:30 AM - 10:00 AM PDT; 5:30 PM - 7:00 PM CET;
Adam Selipsky, AWS CEO, takes the stage to share his insights and the latest news about AWS customers, products, and services.
## [[ARS203]](https://reinvent.awsevents.com/keynotes/) **AWS re:Invent keynote live stream: Machine learning**
### Wednesday, December 1; 8:30 AM - 10:00 AM PDT; 5:30 PM - 7:00 PM CET;
Join Swami Sivasubramanian, Vice President, Amazon Machine Learning, on an exploration of what it takes to put data in action with an end-to-end data strategy including the latest news on databases, analytics, and machine learning.
---
Now let's get into my top 5 remotely available sessions.
**\*Quick Note**: Sadly, workshops, builders' sessions, and chalk talks will only be held in person and will not be recorded\*
## [[AIM328]](https://www.google.com/calendar/render?action=TEMPLATE&text=AIM328%2509Automatically%2520scale%2520Amazon%2520SageMaker%2520endpoints%2520for%2520inference&location=Level%25203%252C%2520Lido%25203006%252C%2520The%2520Venetian&details=Many%2520customers%2520organizations%2520have%2520ML%2520applications%2520with%2520intermittent%2520usage%2520patterns.%2520As%2520a%2520result%252C%2520customers%2520they%2520end%2520up%2520provisioning%2520for%2520peak%2520capacity%2520up%2520front%252C%2520which%2520results%2520in%2520idle%2520capacity.%2520In%2520this%2520session%252C%2520learn%2520how%2520to%2520use%2520Amazon%2520SageMaker%2520to%2520reduce%2520costs%2520for%2520intermittent%2520workloads%2520and%2520scale%2520automatically%2520based%2520on%2520your%2520needs.&dates=20211202T231500Z%252F20211203T001500Z) **Automatically scale Amazon SageMaker endpoints for inference**
### Thursday, December 2; 3:15 PM - 4:15 PM; 12:15 AM - 1:15 AM CET;
Many organizations have ML applications with intermittent usage patterns. As a result, they end up provisioning for peak capacity up front, which results in idle capacity. In this session, learn how to use Amazon SageMaker to reduce costs for intermittent workloads and scale automatically based on your needs.
## [[STP219]](https://portal.awsevents.com/events/reInvent2021/dashboard/event/sessions) **DayTwo redefines microbiome analysis using AWS frameworks**
### Tuesday, November 30; 11:00 AM - 11:50 AM; 8:00 PM - 8:50 PM CET;
In this session, learn how DayTwo used AWS to create the world’s largest, highest-resolution microbiome genomic analysis pipeline. Discover how this enabled data scientists to mine 300 terabytes of genomic datasets, revealing completely new biomarkers and unknown links between the gut microbiome and human health. Then, find out how DayTwo used AWS genomic analysis Quick Starts to build a process workflow on the Nextflow open-source framework and how the company processed huge segments of data to reveal new insights. Finally, hear how DayTwo uses Amazon SageMaker to train predictive machine learning models for early detection of health conditions.
## [[LFS305]](https://portal.awsevents.com/events/reInvent2021/dashboard/event/sessions) **Delivering life-changing medicines at AstraZeneca with data and AI**
### Tuesday, November 30; 12:30 PM - 1:30 PM; 9:30 PM - 10:30 PM CET;
Join this session to learn how AstraZeneca is driving insights at scale and putting the power of artificial intelligence (AI) in the hands of employees by enabling self-service capabilities on AWS, helping them deliver life-changing medicines. Discover how AstraZeneca has implemented an AI-driven drug discovery platform to increase quality and reduce the time it takes to discover a potential drug candidate. Learn how AstraZeneca hosts predictive machine learning models, generative AI, a global analytical database, and molecule search capabilities, and how it uses services such as Amazon EKS, Amazon ES, Amazon Kinesis, Amazon Aurora PostgreSQL, and open-source tools to build and optimize its platform on AWS.
## [[AIM417]](https://portal.awsevents.com/events/reInvent2021/dashboard/event/sessions) **Easily deploy models for the best performance & cost using Amazon SageMaker**
### Wednesday, December 1; 5:30 PM - 6:30 PM; 2:30 AM - 3:30 AM CET;
Optimizing cloud resources to achieve the best cost and performance for your ML model is critical. In this session, learn how to use Amazon SageMaker to run performance benchmarks and load tests for inference to determine the right instance types and model optimizations.
## [[AIM320]](https://portal.awsevents.com/events/reInvent2021/dashboard/event/sessions) **Implementing MLOps practices with Amazon SageMaker, featuring Vanguard**
### Thursday, December 2; 2:30 PM - 3:30 PM; 11:30 PM - 12:30 AM CET;
Implementing MLOps practices helps data scientists and operations engineers collaborate to prepare, build, train, deploy, and manage models at scale. During this session, explore the breadth of MLOps features in Amazon SageMaker that help you provision consistent model development environments, automate ML workflows, implement CI/CD pipelines for ML, monitor models in production, and standardize model governance capabilities. Then, hear from Vanguard as they share their journey enabling MLOps to achieve ML at scale for their polyglot model development platforms using Amazon SageMaker features, including SageMaker projects, SageMaker Pipelines, SageMaker Model Registry, and SageMaker Model Monitor.
## Bonus Tip:
## [[OPN320]](https://portal.awsevents.com/events/reInvent2021/dashboard/event/sessions) **Using Rust to minimize environmental impact**
### Monday, November 29; 11:30 AM - 12:30 PM; 8:30 PM - 9:30 PM CET;
Rust is one of the most energy-efficient and safe programming languages. With Rust, it may be possible to reduce the environmental impact of the IT industry by 50% and prevent 70% of all high-severity CVEs. In this session, dive into the "superpowers" of Rust, hear about the work ahead to give those powers to every engineer and hear the ways you can contribute.
## Conclusions
I know attending an on-site conference remotely is not easy, but AWS is doing a great job of giving us access to great sessions. It will be a great conference for all, continuously raising the bar in the cloud and machine learning domain.
Have fun and enjoy the conference. And never stop building great things!
Don't forget to sign up at [https://reinvent.awsevents.com/register/](https://reinvent.awsevents.com/register/)
---
Thanks for reading. If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Pre-Training BERT with Hugging Face Transformers and Habana Gaudi | https://www.philschmid.de/pre-training-bert-habana | 2022-08-24 | [
"BERT",
"Habana",
"HuggingFace",
"Optimum"
] | Learn how to pre-train BERT from scratch using Hugging Face Transformers and Habana Gaudi. | In this tutorial, you will learn how to pre-train [BERT-base](https://huggingface.co/bert-base-uncased) from scratch using a Habana Gaudi-based [DL1 instance](https://aws.amazon.com/ec2/instance-types/dl1/) on AWS to take advantage of the cost-performance benefits of Gaudi. We will use the Hugging Face [Transformers](https://huggingface.co/docs/transformers), [Optimum Habana](https://huggingface.co/docs/optimum/main/en/habana_index) and [Datasets](https://huggingface.co/docs/datasets) libraries to pre-train a BERT-base model using masked-language modeling, one of the two original BERT pre-training tasks. Before we get started, we need to set up the deep learning environment.
You will learn how to:
1. [Prepare the dataset](#1-prepare-the-dataset)
2. [Train a Tokenizer](#2-train-a-tokenizer)
3. [Preprocess the dataset](#3-preprocess-the-dataset)
4. [Pre-train BERT on Habana Gaudi](#4-pre-train-bert-on-habana-gaudi)
_Note: Steps 1 to 3 can/should be run on a different instance size since those are CPU intensive tasks._
![architecture overview](/static/blog/pre-training-bert-habana/pre-training.png)
**Requirements**
Before we start, make sure you have met the following requirements
- AWS Account with quota for [DL1 instance type](https://aws.amazon.com/ec2/instance-types/dl1/)
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed
- AWS IAM user [configured in CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) with permission to create and manage ec2 instances
**Helpful Resources**
- [Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS](https://www.philschmid.de/getting-started-habana-gaudi)
- [Deep Learning setup made easy with EC2 Remote Runner and Habana Gaudi](https://www.philschmid.de/habana-gaudi-ec2-runner)
- [Optimum Habana Documentation](https://huggingface.co/docs/optimum/main/en/habana_index)
- [Pre-training script](./scripts/run_mlm.py)
- [Code: pre-training-bert.ipynb](https://github.com/philschmid/deep-learning-habana-huggingface/blob/master/pre-training/pre-training-bert.ipynb)
## What is BERT?
BERT, short for Bidirectional Encoder Representations from Transformers, is a Machine Learning (ML) model for natural language processing. It was developed in 2018 by researchers at Google AI Language and serves as a swiss army knife solution to 11+ of the most common language tasks, such as sentiment analysis and named entity recognition.
Read more about BERT in our [BERT 101 🤗 State Of The Art NLP Model Explained](https://huggingface.co/blog/bert-101) blog.
## What is a Masked Language Modeling (MLM)?
MLM enables/enforces bidirectional learning from text by masking (hiding) a word in a sentence and forcing BERT to bidirectionally use the words on either side of the covered word to predict the masked word.
**Masked Language Modeling Example:**
```bash
“Dang! I’m out fishing and a huge trout just [MASK] my line!”
```
Read more about Masked Language Modeling [here](https://huggingface.co/blog/bert-101).
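To get an intuition for the objective, the snippet below runs the example sentence through an already pre-trained BERT checkpoint using the `fill-mask` pipeline. This is purely an illustration of what a trained model does with the `[MASK]` token and is not part of the pre-training workflow itself:
```python
from transformers import pipeline

# illustration only: an already pre-trained BERT fills in the masked word
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

predictions = fill_mask("Dang! I'm out fishing and a huge trout just [MASK] my line!")
for prediction in predictions:
    print(f"{prediction['token_str']:>10} (score: {prediction['score']:.3f})")
# plausible top candidates are words like "broke", "snapped" or "cut"
```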
---
Let's get started. 🚀
_Note: Steps 1 to 3 were run on a AWS c6i.12xlarge instance._
## 1. Prepare the dataset
The Tutorial is "split" into two parts. The first part (step 1-3) is about preparing the dataset and tokenizer. The second part (step 4) is about pre-training BERT on the prepared dataset. Before we can start with the dataset preparation we need to setup our development environment. As mentioned in the introduction you don't need to prepare the dataset on the DL1 instance and could use your notebook or desktop computer.
At first we are going to install `transformers`, `datasets` and `git-lfs` to push our tokenizer and dataset to the [Hugging Face Hub](https://huggingface.co) for later use.
```python
!pip install transformers datasets
!sudo apt-get install git-lfs
```
To finish our setup, let's log into the [Hugging Face Hub](https://huggingface.co/models) so we can push our dataset, tokenizer, model artifacts, logs and metrics to the Hub during and after training.
_To be able to push our model to the Hub, you need to register on the [Hugging Face Hub](https://huggingface.co/join)._
We will use the `notebook_login` util from the `huggingface_hub` package to log into our account. You can get your token in the settings at [Access Tokens](https://huggingface.co/settings/tokens).
```python
from huggingface_hub import notebook_login
notebook_login()
```
Since we are now logged in let's get the `user_id`, which will be used to push the artifacts.
```python
from huggingface_hub import HfApi
user_id = HfApi().whoami()["name"]
print(f"user id '{user_id}' will be used during the example")
```
The [original BERT](https://arxiv.org/abs/1810.04805) was pretrained on [Wikipedia](https://huggingface.co/datasets/wikipedia) and [BookCorpus](https://huggingface.co/datasets/bookcorpus) datasets. Both datasets are available on the [Hugging Face Hub](https://huggingface.co/datasets) and can be loaded with `datasets`.
_Note: For wikipedia we will use the `20220301` dump, which is different from the original split._
As a first step, we load the datasets and merge them together to create one big dataset.
```python
from datasets import concatenate_datasets, load_dataset
bookcorpus = load_dataset("bookcorpus", split="train")
wiki = load_dataset("wikipedia", "20220301.en", split="train")
wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"]) # only keep the 'text' column
assert bookcorpus.features.type == wiki.features.type
raw_datasets = concatenate_datasets([bookcorpus, wiki])
```
_We are not going to do any advanced dataset preparation, like de-duplication, filtering or other pre-processing. If you are planning to apply this notebook to train your own BERT model from scratch, I highly recommend including those data preparation steps in your workflow. This will help you improve your language model._
## 2. Train a Tokenizer
To be able to train our model we need to convert our text into a tokenized format. Most Transformer models come with a pre-trained tokenizer, but since we are pre-training our model from scratch, we also need to train a tokenizer on our data. We can do this with `transformers` and the `BertTokenizerFast` class.
More information about training a new tokenizer can be found in our [Hugging Face Course](https://huggingface.co/course/chapter6/2?fw=pt).
```python
from tqdm import tqdm
from transformers import BertTokenizerFast
# repository id for saving the tokenizer
tokenizer_id="bert-base-uncased-2022-habana"
# create a python generator to dynamically load the data
def batch_iterator(batch_size=10000):
for i in tqdm(range(0, len(raw_datasets), batch_size)):
yield raw_datasets[i : i + batch_size]["text"]
# create a tokenizer from existing one to re-use special tokens
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
```
We can start training the tokenizer with `train_new_from_iterator()`.
```python
bert_tokenizer = tokenizer.train_new_from_iterator(text_iterator=batch_iterator(), vocab_size=32_000)
bert_tokenizer.save_pretrained("tokenizer")
```
We push the tokenizer to the [Hugging Face Hub](https://huggingface.co/models) for later training our model.
```python
# you need to be logged in to push the tokenizer
bert_tokenizer.push_to_hub(tokenizer_id)
```
## 3. Preprocess the dataset
Before we can get started with training our model, the last step is to pre-process/tokenize our dataset. We will use our trained tokenizer to tokenize our dataset and then push it to the hub to load it easily later in our training. The tokenization process is kept pretty simple: if documents are longer than `512` tokens, they are truncated and not split into several documents.
```python
from transformers import AutoTokenizer
import multiprocessing
# load tokenizer
# tokenizer = AutoTokenizer.from_pretrained(f"{user_id}/{tokenizer_id}")
tokenizer = AutoTokenizer.from_pretrained("tokenizer")
num_proc = multiprocessing.cpu_count()
print(f"The max length for the tokenizer is: {tokenizer.model_max_length}")
def tokenize_function(examples):
tokenized_inputs = tokenizer(
examples["text"], return_special_tokens_mask=True, truncation=True, max_length=tokenizer.model_max_length
)
return tokenized_inputs
# preprocess dataset
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True, remove_columns=["text"], num_proc=num_proc)
tokenized_datasets.features
```
As our data processing function, we concatenate all texts from our dataset and generate chunks of `tokenizer.model_max_length` (`512`) tokens.
```python
from itertools import chain
# Main data processing function that will concatenate all texts from our dataset and generate chunks of
# max_seq_length.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= tokenizer.model_max_length:
total_length = (total_length // tokenizer.model_max_length) * tokenizer.model_max_length
# Split by chunks of max_len.
result = {
k: [t[i : i + tokenizer.model_max_length] for i in range(0, total_length, tokenizer.model_max_length)]
for k, t in concatenated_examples.items()
}
return result
tokenized_datasets = tokenized_datasets.map(group_texts, batched=True, num_proc=num_proc)
# shuffle dataset
tokenized_datasets = tokenized_datasets.shuffle(seed=34)
print(f"the dataset contains in total {len(tokenized_datasets)*tokenizer.model_max_length} tokens")
# the dataset contains in total 3417216000 tokens
```
The last step before we can start with our training is to push our prepared dataset to the hub.
```python
# push dataset to hugging face
dataset_id=f"{user_id}/processed_bert_dataset"
tokenized_datasets.push_to_hub(f"{user_id}/processed_bert_dataset")
```
## 4. Pre-train BERT on Habana Gaudi
In this example, we are going to use Habana Gaudi on AWS using the DL1 instance to run the pre-training. We will use the [Remote Runner](https://github.com/philschmid/deep-learning-remote-runner) toolkit to easily launch our pre-training on a remote DL1 Instance from our local setup. You can check-out [Deep Learning setup made easy with EC2 Remote Runner and Habana Gaudi](https://www.philschmid.de/habana-gaudi-ec2-runner) if you want to know more about how this works.
```python
!pip install rm-runner
```
When using GPUs you would use the [Trainer](https://huggingface.co/docs/transformers/v4.19.4/en/main_classes/trainer#transformers.Trainer) and [TrainingArguments](https://huggingface.co/docs/transformers/v4.19.4/en/main_classes/trainer#transformers.TrainingArguments). Since we are going to run our training on Habana Gaudi, we leverage the `optimum-habana` library, which lets us use the [GaudiTrainer](https://huggingface.co/docs/optimum/main/en/habana_trainer) and `GaudiTrainingArguments` instead. The `GaudiTrainer` is a wrapper around the [Trainer](https://huggingface.co/docs/transformers/v4.19.4/en/main_classes/trainer#transformers.Trainer) that allows you to pre-train or fine-tune a transformer model on Habana Gaudi instances.
```diff
-from transformers import Trainer, TrainingArguments
+from optimum.habana import GaudiTrainer, GaudiTrainingArguments
# define the training arguments
-training_args = TrainingArguments(
+training_args = GaudiTrainingArguments(
+ use_habana=True,
+ use_lazy_mode=True,
+ gaudi_config_name=path_to_gaudi_config,
...
)
# Initialize our Trainer
-trainer = Trainer(
+trainer = GaudiTrainer(
model=model,
args=training_args,
train_dataset=train_dataset
... # other arguments
)
```
The `DL1` instance we use has 8 available HPU cores, meaning we can leverage distributed data-parallel training for our model.
To run our training as distributed training we need to create a training script, which can be used with multiprocessing to run on all HPUs.
We have created a [run_mlm.py](https://github.com/philschmid/deep-learning-habana-huggingface/blob/master/pre-training/scripts/run_mlm.py) script implementing masked-language modeling using the `GaudiTrainer`. To execute our distributed training we use the [gaudi_spawn.py](https://github.com/huggingface/optimum-habana/blob/main/examples/gaudi_spawn.py) script from the [optimum-habana](https://github.com/huggingface/optimum-habana) repository and pass our arguments. Alternatively, you could use the `DistributedRunner` from `optimum-habana` directly.
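The following is a condensed, minimal sketch of what the core of such a script can look like, assuming the dataset and tokenizer ids we pushed to the Hub earlier; the actual [run_mlm.py](https://github.com/philschmid/deep-learning-habana-huggingface/blob/master/pre-training/scripts/run_mlm.py) additionally parses the command-line arguments we pass below and handles logging, evaluation and Hub pushing:
```python
# condensed sketch of the core of an MLM pre-training script with the GaudiTrainer
from datasets import load_dataset
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# tokenizer and processed dataset pushed to the Hub in the previous steps
tokenizer = AutoTokenizer.from_pretrained("philschmid/bert-base-uncased-2022-habana")
train_dataset = load_dataset("philschmid/processed_bert_dataset", split="train")

# initialize a fresh BERT with random weights from the bert-base-uncased architecture
config = AutoConfig.from_pretrained("bert-base-uncased", vocab_size=len(tokenizer))
model = AutoModelForMaskedLM.from_config(config)

# dynamically mask 15% of the tokens in every batch
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = GaudiTrainingArguments(
    output_dir="bert-base-uncased-2022",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="philschmid/bert-base-uncased-2022-habana",
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    max_steps=100_000,
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)
trainer.train()
```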
Before we can start our training we need to define the `hyperparameters` we want to use for our training. We are leveraging the [Hugging Face Hub](https://huggingface.co/models) integration of the `GaudiTrainer` to automatically push our checkpoints, logs and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
# hyperparameters
hyperparameters = {
"model_config_id": "bert-base-uncased",
"dataset_id": "philschmid/processed_bert_dataset",
"tokenizer_id": "philschmid/bert-base-uncased-2022-habana",
"gaudi_config_id": "philschmid/bert-base-uncased-2022-habana",
"repository_id": "bert-base-uncased-2022",
"hf_hub_token": HfFolder.get_token(), # need to be logged in with `huggingface-cli login`
"max_steps": 100_000,
"per_device_train_batch_size": 32,
"learning_rate": 5e-5,
}
hyperparameters_string = " ".join(f"--{key} {value}" for key, value in hyperparameters.items())
```
We can start our training by creating an `EC2RemoteRunner` and then `launch` it. This will start our AWS EC2 DL1 instance and run our `run_mlm.py` script on it using the `huggingface/optimum-habana:4.21.1-pt1.11.0-synapse1.5.0` container.
```python
from rm_runner import EC2RemoteRunner
# create ec2 remote runner
runner = EC2RemoteRunner(
instance_type="dl1.24xlarge",
profile="hf-sm", # adjust to your profile
region="us-east-1",
container="huggingface/optimum-habana:4.21.1-pt1.11.0-synapse1.5.0"
)
# launch my script with gaudi_spawn for distributed training
runner.launch(
command=f"python3 gaudi_spawn.py --use_mpi --world_size=8 run_mlm.py {hyperparameters_string}",
source_dir="scripts",
)
```
![architecture overview](/static/blog/pre-training-bert-habana/tensorboard.png)
_This [experiment](https://huggingface.co/philschmid/bert-base-uncased-2022-habana-test-6) ran for 60k steps_
In our `hyperparameters` we defined a `max_steps` property, which limited the pre-training to only `100_000` steps. The `100_000` steps with a global batch size of `256` took around 12,5 hours.
BERT was originally pre-trained on [1 Million Steps](https://arxiv.org/pdf/1810.04805.pdf) with a global batch size of `256`:
> We train with batch size of 256 sequences (256 sequences \* 512 tokens = 128,000 tokens/batch) for 1,000,000 steps, which is approximately 40 epochs over the 3.3 billion word corpus.
This means a full pre-training run would take around 125 hours (12,5 hours \* 10) and would cost us around ~$1,650 using Habana Gaudi on AWS, which is extremely cheap.
For comparison, the DeepSpeed Team, who holds the record for the [fastest BERT-pretraining](https://www.deepspeed.ai/tutorials/bert-pretraining/), [reported](https://www.deepspeed.ai/tutorials/bert-pretraining/) that pre-training BERT on 1 [DGX-2](https://www.nvidia.com/en-us/data-center/dgx-2/) (powered by 16 NVIDIA V100 GPUs with 32GB of memory each) takes around 33,25 hours.
To compare the cost we can use the [p3dn.24xlarge](https://aws.amazon.com/de/ec2/instance-types/p3/) as reference, which comes with 8x NVIDIA V100 32GB GPUs and costs 31,22\$/h. We would need two of these instances to have the same "setup" as the one DeepSpeed reported; for now, we are ignoring any overhead created by the multi-node setup (I/O, network, etc.).
This would bring the cost of the DeepSpeed GPU based training on AWS to around ~$2,075, which is 25% more than what Habana Gaudi currently delivers.
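For transparency, here is the back-of-the-envelope math behind those numbers; the dl1.24xlarge on-demand price is an assumption based on the us-east-1 list price (~$13.11/h) at the time of writing:
```python
# back-of-the-envelope cost estimate
dl1_price_per_hour = 13.11    # dl1.24xlarge on-demand, us-east-1 (assumption)
p3dn_price_per_hour = 31.22   # p3dn.24xlarge on-demand, see above

# Habana Gaudi: 100k steps took ~12.5h, so 1M steps ~ 125h on one DL1 instance
gaudi_cost = 125 * dl1_price_per_hour
print(f"Gaudi estimate: ${gaudi_cost:,.0f}")   # ~$1,639

# DeepSpeed reference: 33.25h on one DGX-2, roughly 2x p3dn.24xlarge on AWS
gpu_cost = 33.25 * 2 * p3dn_price_per_hour
print(f"GPU estimate:   ${gpu_cost:,.0f}")     # ~$2,076
```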
_Something to note here is that using [DeepSpeed](https://www.deepspeed.ai/tutorials/bert-pretraining/#deepspeed-single-gpu-throughput-results) in general improves the performance by a factor of ~1.5 - 2. A factor of ~1.5 - 2x, means that the same pre-training job without DeepSpeed would likely take twice as long and cost twice as much or ~$3-4k._
We are looking forward to running the experiment again once the [Gaudi DeepSpeed integration](https://docs.habana.ai/en/latest/PyTorch/DeepSpeed/DeepSpeed_User_Guide.html#deepspeed-configs) is more widely available.
## Conclusion
That's it for this Tutorial. Now you know the basics on how to pre-train BERT from scratch using Hugging Face Transformers and Habana Gaudi. You also saw how easy it is to migrate from the `Trainer` to the `GaudiTrainer`.
We compared our implementation with the [fastest BERT-pretraining](https://www.deepspeed.ai/tutorials/bert-pretraining/) results and saw that Habana Gaudi still delivers a 25% cost reduction and allows us to pre-train BERT for ~$1,650.
Those results are incredible since they will allow companies to adapt their pre-trained models to their language and domain to [improve accuracy up to 10%](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1#evaluation-results) compared to the general BERT models.
If you are interested in training your own BERT or other Transformers models from scratch to reduce cost and improve accuracy, [contact our experts](mailto:expert-acceleration@huggingface.co) to learn about our [Expert Acceleration Program](https://huggingface.co/support). To learn more about Habana solutions, [read about our partnership and how to contact them](https://huggingface.co/hardware/habana).
Code: [pre-training-bert.ipynb](https://github.com/philschmid/deep-learning-habana-huggingface/blob/master/pre-training/pre-training-bert.ipynb)
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy BigScience T0_3B to AWS & Amazon SageMaker | https://www.philschmid.de/deploy-bigscience-t0-3b-to-aws-and-amazon-sagemaker | 2021-10-20 | [
"AWS",
"Shorts",
"HuggingFace",
"Sagemaker"
] | 🌸 BigScience released their first modeling paper introducing T0 which outperforms GPT-3 on many zero-shot tasks while being 16x smaller! Deploy the 3 billion parameter version of BigScience T0 (T0_3B) to Amazon SageMaker with a few lines of code to run a scalable production workload! | Earlier this week 🌸 [BigScience](https://bigscience.huggingface.co/) released their first [modeling paper](https://arxiv.org/abs/2110.08207) for the collaboration introducing [T0\*](https://huggingface.co/bigscience/T0_3B). For those of you who haven't heard about 🌸 [BigScience](https://bigscience.huggingface.co/), it is an open collaboration of 600 researchers from 50 countries and +250 institutions creating large multilingual neural network language models and very large multilingual text datasets together using the Jean Zay (IDRIS) supercomputer.
The [paper](https://arxiv.org/pdf/2110.08207.pdf) introduces a new model [T0\*](https://huggingface.co/bigscience/T0_3B), which is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. You can learn more about T0\* on the [Hugging Face model card](https://huggingface.co/bigscience/T0_3B). But in short [T0\*](https://huggingface.co/bigscience/T0_3B) outperforms GPT-3 on many zero-shot tasks while being 16x smaller!
![t0](/static/blog/deploy-bigscience-t0-3b-to-aws-and-amazon-sagemaker/t0.png)
Image from [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207)
We will take advantage of this downsizing and deploy the model to AWS & Amazon SageMaker with just a few lines of code for production scale workloads.
Check out my other blog post ["Scalable, Secure Hugging Face Transformer Endpoints with Amazon SageMaker, AWS Lambda, and CDK"](https://www.philschmid.de/huggingface-transformers-cdk-sagemaker-lambda) to learn how you could create a secure public-facing `T0_3B` API.
---
## Tutorial
If you’re not familiar with Amazon SageMaker: _“Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.”_ [[REF]](https://aws.amazon.com/sagemaker/faqs/)
**What are we going to do:**
- Setting up the environment
- deploy `T0_3B` to Amazon SageMaker
- Run inference and test the Model
![sm-endpoint.png](/static/blog/deploy-bigscience-t0-3b-to-aws-and-amazon-sagemaker/sm-endpoint.png)
### Setting up the environment
We will use an Amazon SageMaker Notebook Instance for the example. You can learn **[here how to set up a Notebook Instance.](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html)** To get started, jump into your Jupyter Notebook or JupyterLab and create a new Notebook with the **`conda_pytorch_p36`** kernel.
**_Note: The use of Jupyter is optional: We could also use a laptop, another IDE, or a task scheduler like Airflow or AWS Step Functions, as long as we have the appropriate permissions._**
After that, we can install the required dependencies.
```bash
pip install "sagemaker>=2.48.0" --upgrade
```
To deploy a model on SageMaker, we need to provide an IAM role with the right permission. The **`get_execution_role`** method is provided by the SageMaker SDK as an optional convenience (only available in Notebook Instances and Studio).
```python
import sagemaker
role = sagemaker.get_execution_role()
```
### Deploy `T0_3B` to Amazon SageMaker
To deploy a `T0_3B` directly from the [Hugging Face Model Hub](http://hf.co/models) to Amazon SageMaker, we need to define two environment variables when creating the **`HuggingFaceModel`**. We need to define:
- HF_MODEL_ID: defines the model id, which will be automatically loaded from **[huggingface.co/models](http://huggingface.co/models)** when creating our SageMaker Endpoint.
- HF_TASK: defines the task for the used 🤗 Transformers pipeline.
```python
from sagemaker.huggingface.model import HuggingFaceModel
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'bigscience/T0_3B', # model_id from hf.co/models
'HF_TASK':'text2text-generation' # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
env=hub,
role=role
)
```
After we create our `HuggingFaceModel` instance we can run `.deploy()` and provide our required infrastructure configuration. Since the model is pretty big we are going to use the `ml.g4dn.2xlarge` instance type.
```python
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type='ml.g4dn.2xlarge'
)
```
This will start the deployment of our model and the endpoint should be up and ready for inference after a few minutes.
### Run inference and test the Model
The `.deploy` method is returning a `HuggingFacePredictor`, which we can use to immediately run inference against our model after it is up and ready.
```python
predictor.predict({
'inputs': "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
})
# ✅ [{'generated_text': 'Positive'}]
predictor.predict({
'inputs': "A is the son's of B's uncle. What is the family relationship between A and B?"
})
# ✅ [{'generated_text': "B is A's cousin."}]
```
After we run our inference we can delete the endpoint again.
```python
# delete endpoint
predictor.delete_endpoint()
```
## Conclusion
This short blog post showed how you can easily deploy and run inference on `T0_3B` in a secure, controlled & managed environment. The endpoint can be integrated into your existing applications, or you could create a public-facing API out of it by adding an AWS Lambda wrapper. Check out my other blog post ["Scalable, Secure Hugging Face Transformer Endpoints with Amazon SageMaker, AWS Lambda, and CDK"](https://www.philschmid.de/huggingface-transformers-cdk-sagemaker-lambda) for this.
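As a rough sketch of such a wrapper, a Lambda function could simply proxy requests to the endpoint via the SageMaker runtime; the endpoint name below is assumed to be passed in as an environment variable, and the full CDK setup is covered in the linked post:
```python
# minimal AWS Lambda handler sketch that proxies requests to the SageMaker endpoint
# (assumes the endpoint name is provided via the ENDPOINT_NAME environment variable)
import json
import os

import boto3

runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    payload = {"inputs": json.loads(event["body"])["inputs"]}
    response = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    prediction = json.loads(response["Body"].read().decode("utf-8"))
    return {"statusCode": 200, "body": json.dumps(prediction)}
```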
But the biggest thanks goes to the 🌸 [BigScience](https://bigscience.huggingface.co/) collaboration for creating and sharing the results of their great work. I am so grateful that open-science & open-source exist and are being pushed forward.
---
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Stable Diffusion on Amazon SageMaker | https://www.philschmid.de/sagemaker-stable-diffusion | 2022-11-01 | [
"AWS",
"HuggingFace",
"SageMaker",
"Diffusion"
] | Learn how to deploy Stable Diffusion to Amazon SageMaker to generate images. | Welcome to this Amazon SageMaker guide on how to use [Stable Diffusion](https://huggingface.co/blog/stable_diffusion) to generate images for a given input prompt. We will deploy [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to Amazon SageMaker for real-time inference using Hugging Face's [🧨 Diffusers library](https://huggingface.co/docs/diffusers/index).
![stable-diffusion-on-amazon-sagemaker](/static/blog/sagemaker-stable-diffusion/sd-on-sm.png)
What we are going to do:
1. [Create Stable Diffusion inference script](#1-create-stable-diffusion-inference-script)
2. [Create SageMaker `model.tar.gz` artifact](#2-create-sagemaker-modeltargz-artifact)
3. [Deploy the model to Amazon SageMaker](#3-deploy-the-model-to-amazon-sagemaker)
4. [Generate images using the deployed model](#4-generate-images-using-the-deployed-model)
## What is Stable Diffusion?
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). It is trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. LAION-5B is the largest, freely accessible multi-modal dataset that currently exists.
This guide will not explain how the model works. If you are interested you should check out the [Stable Diffusion with 🧨 Diffusers
](https://huggingface.co/blog/stable_diffusion) blog post or [The Annotated Diffusion Model](https://huggingface.co/blog/annotated-diffusion)
![stable-diffusion](/static/blog/sagemaker-stable-diffusion/stable-diffusion-arch.png)
---
Before we can get started, make sure you have a [Hugging Face user account](https://huggingface.co/join). The account is needed to load [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) from the [Hugging Face Hub](https://huggingface.co/).
Create account: https://huggingface.co/join
Before we can get started we have to install the missing dependencies to be able to create our `model.tar.gz` artifact and create our Amazon SageMaker endpoint.
We also have to make sure we have the permission to create our SageMaker Endpoint.
```python
!pip install "sagemaker==2.116.0" "huggingface_hub==0.10.1" --upgrade --quiet
```
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 1. Create Stable Diffusion inference script
Amazon SageMaker allows us to customize the inference script by providing an `inference.py` file. The `inference.py` file is the entry point to our model. It is responsible for loading the model and handling the inference request. If you are used to deploying Hugging Face Transformers, this might be new to you. Usually, we just provide the `HF_MODEL_ID` and `HF_TASK` and the Hugging Face DLC takes care of the rest. For `diffusers` that's not yet possible. We have to provide the `inference.py` file and implement the `model_fn` and `predict_fn` functions.
If you want to learn more about creating a custom inference script you can check out [Creating document embeddings with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/custom-inference-huggingface-sagemaker)
In addition to the `inference.py` file we also have to provide a `requirements.txt` file. The `requirements.txt` file is used to install the dependencies for our `inference.py` file.
The first step is to create a `code/` directory.
```python
!mkdir code
```
Next, we create a `requirements.txt` file and add the `diffusers` library to it.
```python
%%writefile code/requirements.txt
diffusers==0.6.0
transformers==4.23.1
```
The last step for our inference handler is to create the `inference.py` file. The `inference.py` file is responsible for loading the model and handling the inference request. The `model_fn` function is called when the model is loaded. The `predict_fn` function is called when we want to do inference.
We are using the `diffusers` library to load the model in the `model_fn` and generate 4 images for an input prompt with the `predict_fn`. The `predict_fn` function returns the `4` images as `base64` encoded strings.
```python
%%writefile code/inference.py
import base64
import torch
from io import BytesIO
from diffusers import StableDiffusionPipeline
def model_fn(model_dir):
# Load stable diffusion and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_dir, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
return pipe
def predict_fn(data, pipe):
# get prompt & parameters
prompt = data.pop("inputs", data)
# set valid HP for stable diffusion
num_inference_steps = data.pop("num_inference_steps", 50)
guidance_scale = data.pop("guidance_scale", 7.5)
num_images_per_prompt = data.pop("num_images_per_prompt", 4)
# run generation with parameters
generated_images = pipe(
prompt,
num_inference_steps=num_inference_steps,
guidance_scale=guidance_scale,
num_images_per_prompt=num_images_per_prompt,
)["images"]
# create response
encoded_images = []
for image in generated_images:
buffered = BytesIO()
image.save(buffered, format="JPEG")
encoded_images.append(base64.b64encode(buffered.getvalue()).decode())
# create response
return {"generated_images": encoded_images}
```
## 2. Create SageMaker `model.tar.gz` artifact
To use our `inference.py` we need to bundle it together with our model weights into a `model.tar.gz`. The archive includes all the model artifacts needed to run inference. The `inference.py` script will be placed into a `code/` folder. We will use the `huggingface_hub` SDK to easily download [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) from [Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v1-4) and then upload it to Amazon S3 with the `sagemaker` SDK.
Before we can load our model from the Hugging Face Hub we have to make sure that we accepted the license of [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to be able to use it. [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) is published under the [CreativeML OpenRAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license). You can accept the license by clicking on the `Agree and access repository` button on the model page at: https://huggingface.co/CompVis/stable-diffusion-v1-4.
![license](/static/blog/sagemaker-stable-diffusion/license.png)
_Note: This will give access to the repository for the logged in user. This user can then be used to generate [HF Tokens](https://huggingface.co/settings/tokens) to load the model programmatically._
Before we can load the model, make sure you have a valid [HF Token](https://huggingface.co/settings/token). You can create a token by going to your [Hugging Face Settings](https://huggingface.co/settings/token) and clicking on the `New token` button. Make sure the environment has enough disk space to store the model; ~30GB should be enough.
```python
from distutils.dir_util import copy_tree
from pathlib import Path
from huggingface_hub import snapshot_download
import random
HF_MODEL_ID="CompVis/stable-diffusion-v1-4"
HF_TOKEN="" # your hf token: https://huggingface.co/settings/tokens
assert len(HF_TOKEN) > 0, "Please set HF_TOKEN to your huggingface token. You can find it here: https://huggingface.co/settings/tokens"
# download snapshot
snapshot_dir = snapshot_download(repo_id=HF_MODEL_ID,revision="fp16",use_auth_token=HF_TOKEN)
# create model dir
model_tar = Path(f"model-{random.getrandbits(16)}")
model_tar.mkdir(exist_ok=True)
# copy snapshot to model dir
copy_tree(snapshot_dir, str(model_tar))
```
The next step is to copy the `code/` directory into the `model/` directory.
```python
# copy code/ to model dir
copy_tree("code/", str(model_tar.joinpath("code")))
```
Before we can upload the model to Amazon S3 we have to create a `model.tar.gz` archive. It is important that the archive directly contains all files and not a folder containing the files. For example, your file should look like this:
```
model.tar.gz/
|- model_index.json
|- unet/
|- code/
```
```python
import tarfile
import os
# helper to create the model.tar.gz
def compress(tar_dir=None,output_file="model.tar.gz"):
parent_dir=os.getcwd()
os.chdir(tar_dir)
with tarfile.open(os.path.join(parent_dir, output_file), "w:gz") as tar:
for item in os.listdir('.'):
print(item)
tar.add(item, arcname=item)
os.chdir(parent_dir)
compress(str(model_tar))
```
After we created the `model.tar.gz` archive we can upload it to Amazon S3. We will use the `sagemaker` SDK to upload the model to our sagemaker session bucket.
```python
from sagemaker.s3 import S3Uploader
# upload model.tar.gz to s3
s3_model_uri=S3Uploader.upload(local_path="model.tar.gz", desired_s3_uri=f"s3://{sess.default_bucket()}/stable-diffusion-v1-4")
print(f"model uploaded to: {s3_model_uri}")
```
## 3. Deploy the model to Amazon SageMaker
After we have uploaded our model archive we can deploy our model to Amazon SageMaker. We will use `HuggingFaceModel` to create our real-time inference endpoint.
We are going to deploy the model to a `g4dn.xlarge` instance. The `g4dn.xlarge` instance is a GPU instance with 1 NVIDIA Tesla T4 GPU. If you are interested in how you could add autoscaling to your endpoint you can check out [Going Production: Auto-scaling Hugging Face Transformers with Amazon SageMaker](https://www.philschmid.de/auto-scaling-sagemaker-huggingface).
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# deploy the endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge"
)
```
## 4. Generate images using the deployed model
The `.deploy()` returns a `HuggingFacePredictor` object which can be used to request inference. Our endpoint expects a `json` with at least an `inputs` key. The `inputs` key is the input prompt for the model, which will be used to generate the image. Additionally, we can provide `num_inference_steps`, `guidance_scale` & `num_images_per_prompt` to control the generation.
The `predictor.predict()` function returns a `json` with the `generated_images` key. The `generated_images` key contains the `4` generated images as `base64` encoded strings. To decode our response we added a small helper function `decode_base64_image`, which takes a `base64` encoded string and returns a `PIL.Image` object, and `display_images`, which takes a list of `PIL.Image` objects and displays them.
```python
from PIL import Image
from io import BytesIO
from IPython.display import display
import base64
import matplotlib.pyplot as plt
# helper decoder
def decode_base64_image(image_string):
base64_image = base64.b64decode(image_string)
buffer = BytesIO(base64_image)
return Image.open(buffer)
# display PIL images as grid
def display_images(images=None,columns=3, width=100, height=100):
plt.figure(figsize=(width, height))
for i, image in enumerate(images):
plt.subplot(int(len(images) / columns + 1), columns, i + 1)
plt.axis('off')
plt.imshow(image)
```
Now, let's generate some images. As an example, let's generate `3` images for the prompt `A dog trying catch a flying pizza art drawn by disney concept artists`. Generating `3` images takes around `30` seconds.
```python
num_images_per_prompt = 3
prompt = "A dog trying catch a flying pizza art drawn by disney concept artists, golden colour, high quality, highly detailed, elegant, sharp focus"
# run prediction
response = predictor.predict(data={
"inputs": prompt,
"num_images_per_prompt" : num_images_per_prompt
}
)
# decode images
decoded_images = [decode_base64_image(image) for image in response["generated_images"]]
# visualize generation
display_images(decoded_images)
```
![png](/static/blog/sagemaker-stable-diffusion/result.png)
### Delete model and endpoint
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Going Production: Auto-scaling Hugging Face Transformers with Amazon SageMaker | https://www.philschmid.de/auto-scaling-sagemaker-huggingface | 2021-10-29 | [
"AWS",
"BERT",
"HuggingFace",
"Sagemaker"
] | Learn how to add auto-scaling to your Hugging Face Transformers SageMaker Endpoints. | Welcome to this getting started guide. We will use the new Hugging Face Inference DLCs and Amazon SageMaker Python SDK to deploy a transformer model for real-time inference.
In this example, we are going to deploy a trained Hugging Face Transformer model onto SageMaker for inference.
```python
!pip install "sagemaker>=2.66.2" --upgrade
```
```python
import sagemaker
sagemaker.__version__
# '2.66.2.post0'
```
Reference Blog post [Configuring autoscaling inference endpoints in Amazon SageMaker](https://aws.amazon.com/de/blogs/machine-learning/configuring-autoscaling-inference-endpoints-in-amazon-sagemaker/)
## Deploy a Hugging Face Transformer model to Amazon SageMaker for Inference
To deploy a model directly from the Hub to SageMaker we need to define 2 environment variables when creating the `HuggingFaceModel`. We need to define:
- `HF_MODEL_ID`: defines the model id, which will be automatically loaded from [huggingface.co/models](http://huggingface.co/models) when creating our SageMaker Endpoint. The 🤗 Hub provides more than 10,000 models, all available through this environment variable.
- `HF_TASK`: defines the task for the used 🤗 Transformers pipeline. A full list of tasks can be found [here](https://huggingface.co/transformers/main_classes/pipelines.html).
```python
from sagemaker.huggingface import HuggingFaceModel
from uuid import uuid4
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'yiyanghkust/finbert-tone', # model_id from hf.co/models
'HF_TASK':'text-classification' # NLP task you want to use for predictions
}
# endpoint name
endpoint_name=f'{hub["HF_MODEL_ID"].split("/")[1]}-{str(uuid4())}' # model and endpoint name
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub,
role=role, # iam role with permissions to create an Endpoint
name=endpoint_name, # model and endpoint name
transformers_version="4.11", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version="py38", # python version of the DLC
)
```
The next step is to deploy our endpoint.
```python
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.c5.large"
)
# get aws region for dashboards
aws_region = predictor.sagemaker_session.boto_region_name
```
**Architecture**
The [Hugging Face Inference Toolkit for SageMaker](https://github.com/aws/sagemaker-huggingface-inference-toolkit) is an open-source library for serving Hugging Face transformer models on SageMaker. It utilizes the SageMaker Inference Toolkit for starting up the model server, which is responsible for handling inference requests. The SageMaker Inference Toolkit uses [Multi Model Server (MMS)](https://github.com/awslabs/multi-model-server) for serving ML models. It bootstraps MMS with a configuration and settings that make it compatible with SageMaker and allow you to adjust important performance parameters, such as the number of workers per model, depending on the needs of your scenario.
![](/static/blog/auto-scaling-sagemaker-huggingface/hf-inference-toolkit.jpg)
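One of these performance parameters is, for example, the number of model server workers. A minimal sketch of how this could be set, assuming the `SAGEMAKER_MODEL_SERVER_WORKERS` environment variable read by the SageMaker Inference Toolkit:
```python
# sketch: tune the number of model server workers via an environment variable
# (variable name from the SageMaker Inference Toolkit; the value "2" is only an assumption)
tuned_env = {
    **hub,                                  # HF_MODEL_ID / HF_TASK from above
    "SAGEMAKER_MODEL_SERVER_WORKERS": "2",  # start two workers for this model
}
# passing env=tuned_env to HuggingFaceModel would start MMS with two workers
```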
**Deploying a model using SageMaker hosting services is a three-step process:**
1. **Create a model in SageMaker** —By creating a model, you tell SageMaker where it can find the model components.
2. **Create an endpoint configuration for an HTTPS endpoint** —You specify the name of one or more models in production variants and the ML compute instances that you want SageMaker to launch to host each production variant.
3. **Create an HTTPS endpoint** —Provide the endpoint configuration to SageMaker. The service launches the ML compute instances and deploys the model or models as specified in the configuration
![](/static/blog/auto-scaling-sagemaker-huggingface/sm-endpoint.png)
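Under the hood, the `.deploy()` call we used above performs exactly these three API calls for us. A rough `boto3` sketch of the same steps (all names and the container image URI are placeholders, not values used in this example):
```python
import boto3

sm = boto3.client("sagemaker")

# 1. create a model: tells SageMaker where to find the model components
sm.create_model(
    ModelName="my-hf-model",
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": "<huggingface-inference-dlc-image-uri>",
        "Environment": hub,  # HF_MODEL_ID / HF_TASK
    },
)
# 2. create an endpoint configuration: production variant + instance type
sm.create_endpoint_config(
    EndpointConfigName="my-hf-endpoint-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-hf-model",
        "InitialInstanceCount": 1,
        "InstanceType": "ml.c5.large",
    }],
)
# 3. create the HTTPS endpoint from the configuration
sm.create_endpoint(
    EndpointName="my-hf-endpoint",
    EndpointConfigName="my-hf-endpoint-config",
)
```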
After the endpoint is deployed we can use the `predictor` to send requests.
```python
# example request, you always need to define "inputs"
data = {
"inputs": "There is a shortage of capital for project SageMaker. We need extra financing"
}
# request
predictor.predict(data)
# [{'label': 'negative', 'score': 0.9870443940162659}]
```
## Model Monitoring
To properly monitor our endpoint lets send a few hundred requests.
```python
for i in range(500):
predictor.predict(data)
```
After that we can go to the cloudwatch dashboard to take a look.
```python
print(f"https://console.aws.amazon.com/cloudwatch/home?region={aws_region}#metricsV2:graph=~(metrics~(~(~'AWS*2fSageMaker~'ModelLatency~'EndpointName~'finbert-tone-73d26f97-9376-4b3f-9334-a2-2021-10-29-12-18-52-365~'VariantName~'AllTraffic))~view~'timeSeries~stacked~false~start~'-PT15M~end~'P0D~region~'{aws_region}~stat~'SampleCount~period~30);query=~'*7bAWS*2fSageMaker*2cEndpointName*2cVariantName*7d*20{predictor.endpoint_name}")
```
![model-monitoring-dashboard](/static/blog/auto-scaling-sagemaker-huggingface/model-monitoring-dashboard.png)
# Auto Scaling your Model
[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that provides every developer and data scientist with the ability to quickly build, train, and deploy machine learning (ML) models at scale.
Autoscaling is an out-of-the-box feature that monitors your workloads and dynamically adjusts the capacity to maintain steady and predictable performance at the possible lowest cost.
The following diagram is a sample architecture that showcases how a model is served as an endpoint with autoscaling enabled.
![autoscaling-endpoint](/static/blog/auto-scaling-sagemaker-huggingface/autoscaling-endpoint.png)
## Configure Autoscaling for our Endpoint
You can define the minimum, desired, and the maximum number of instances per endpoint and, based on the autoscaling configurations, instances are managed dynamically. The following diagram illustrates this architecture.
![scaling-options](/static/blog/auto-scaling-sagemaker-huggingface/scaling-options.jpeg)
AWS offers many different [ways to auto-scale your endpoints](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html). One of them is target tracking scaling, where you scale the instance capacity based on the `CPUUtilization` of the instances or on `SageMakerVariantInvocationsPerInstance`.
In this example we are going to use `CPUUtilization` to auto-scale our endpoint.
```python
import boto3
# Let us define a client to play with autoscaling options
asg_client = boto3.client('application-autoscaling') # Common class representing Application Auto Scaling for SageMaker amongst other services
# the resource type is variant and the unique identifier is the resource ID.
# Example: endpoint/my-bert-fine-tuned/variant/AllTraffic .
resource_id=f"endpoint/{predictor.endpoint_name}/variant/AllTraffic"
# scaling configuration
response = asg_client.register_scalable_target(
ServiceNamespace='sagemaker', #
ResourceId=resource_id,
ScalableDimension='sagemaker:variant:DesiredInstanceCount',
MinCapacity=1,
MaxCapacity=4
)
```
Create Scaling Policy with configuration details, e.g. `TargetValue` when the instance should be scaled.
```python
response = asg_client.put_scaling_policy(
PolicyName=f'CPUUtil-ScalingPolicy-{predictor.endpoint_name}',
ServiceNamespace='sagemaker',
ResourceId=resource_id,
ScalableDimension='sagemaker:variant:DesiredInstanceCount',
PolicyType='TargetTrackingScaling',
TargetTrackingScalingPolicyConfiguration={
'TargetValue': 50.0, # threshold
'CustomizedMetricSpecification':
{
'MetricName': 'CPUUtilization',
'Namespace': '/aws/sagemaker/Endpoints',
'Dimensions': [
{'Name': 'EndpointName', 'Value': predictor.endpoint_name },
{'Name': 'VariantName','Value': 'AllTraffic'}
],
'Statistic': 'Average', # Possible - 'Statistic': 'Average'|'Minimum'|'Maximum'|'SampleCount'|'Sum'
'Unit': 'Percent'
},
'ScaleInCooldown': 300, # duration until scale in
'ScaleOutCooldown': 100 # duration between scale out
}
)
```
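Alternatively, if you prefer the predefined `SageMakerVariantInvocationsPerInstance` metric mentioned above over the custom `CPUUtilization` metric, the policy configuration becomes a bit simpler. A sketch, where the target value of `100` invocations per instance per minute is only an assumption:
```python
# sketch: same target tracking policy using the predefined invocations metric
response = asg_client.put_scaling_policy(
    PolicyName=f'Invocations-ScalingPolicy-{predictor.endpoint_name}',
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 100.0, # assumed target of ~100 invocations per instance per minute
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance'
        },
        'ScaleInCooldown': 300,
        'ScaleOutCooldown': 100
    }
)
```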
Next, we stress test the endpoint with threaded requests:
```python
from concurrent.futures import ThreadPoolExecutor
import os
workers = os.cpu_count() * 5
requests = 200
print(f"workers used for load test: {workers}")
with ThreadPoolExecutor(max_workers=workers) as executor:
for i in range(requests):
executor.submit(predictor.predict, data)
```
Monitor the `CPUUtilization` in CloudWatch:
```python
print(f"https://console.aws.amazon.com/cloudwatch/home?region={aws_region}#metricsV2:graph=~(metrics~(~(~'*2faws*2fsagemaker*2fEndpoints~'CPUUtilization~'EndpointName~'finbert-tone-73d26f97-9376-4b3f-9334-a2-2021-10-29-12-18-52-365~'VariantName~'AllTraffic))~view~'timeSeries~stacked~false~region~'{aws_region}~start~'-PT15M~end~'P0D~stat~'Average~period~60);query=~'*7b*2faws*2fsagemaker*2fEndpoints*2cEndpointName*2cVariantName*7d*20finbert-tone-73d26f97-9376-4b3f-9334-a2-2021-10-29-12-18-52-365*20Endpoint*20{predictor.endpoint_name}*20has*20Current*20Instance*20Count*3a*201*20With*20a*20desired*20instance*20count*20of*201")
```
Now we check the endpoint's instance count and see that SageMaker has scaled out.
```python
bt_sm = boto3.client('sagemaker')
response = bt_sm.describe_endpoint(EndpointName=predictor.endpoint_name)
print(f"Endpoint {response['EndpointName']} has \nCurrent Instance Count: {response['ProductionVariants'][0]['CurrentInstanceCount']}\nWith a desired instance count of {response['ProductionVariants'][0]['DesiredInstanceCount']}")
# Endpoint finbert-tone-73d26f97-9376-4b3f-9334-a2-2021-10-29-12-18-52-365 has
# Current Instance Count: 4
# With a desired instance count of 4
```
## Clean up
```python
# delete endpoint
predictor.delete_endpoint()
```
# Conclusion
With the help of Application Auto Scaling we were able to apply elasticity without heavy lifting. The endpoint now adapts to the incoming load and scales in and out as required.
Through the simplicity of SageMaker you no longer need huge ops teams to manage and scale your machine learning models. You can do it yourself.
---
You can find the code [here](https://github.com/huggingface/notebooks/blob/master/sagemaker/13_deploy_and_autoscaling_transformers/sagemaker-notebook.ipynb) and feel free to open a thread in the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Multi-Container Endpoints with Hugging Face Transformers and Amazon SageMaker | https://www.philschmid.de/sagemaker-huggingface-multi-container-endpoint | 2022-02-22 | [
"HuggingFace",
"AWS",
"BERT",
"SageMaker"
] | Learn how to deploy multiple Hugging Face Transformers for inference with Amazon SageMaker and Multi-Container Endpoints. | Welcome to this getting started guide. We will use the Hugging Face Inference DLCs and Amazon SageMaker to deploy multiple transformer models as [Multi-Container Endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/multi-container-endpoints.html).
Amazon SageMaker Multi-Container Endpoint is an inference option to deploy multiple containers (multiple models) to the same SageMaker real-time endpoint. These models/containers can be accessed individually or in a pipeline. Amazon SageMaker [Multi-Container Endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/multi-container-endpoints.html) can be used to improve endpoint utilization and optimize costs. An example of this is **time zone differences**: if the workload for model A (U.S.) occurs mostly during the day and the workload for model B (Germany) mostly during the night, you can deploy model A and model B to the same SageMaker endpoint and optimize your costs.
_**NOTE:** At the time of writing this, only `CPU` Instances are supported for Multi-Container Endpoint._
[notebook](https://github.com/philschmid/huggingface-sagemaker-multi-container-endpoint/blob/master/sagemaker-notebook.ipynb)
![mce](/static/blog/sagemaker-huggingface-multi-container-endpoint/mce.png)
## Development Environment and Permissions
_NOTE: You can run this demo in Sagemaker Studio, your local machine, or Sagemaker Notebook Instances_
```python
%pip install sagemaker --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.75.0"
```
### Permissions
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
region = sess.boto_region_name
sm_client = boto3.client('sagemaker')
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {region}")
```
## Multi-Container Endpoint creation
At the time of writing, the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/) does not support Multi-Container Endpoint deployments. That's why we are going to use `boto3` to create the endpoint.
The first step, though, is to use the SDK to get our container URIs for the Hugging Face Inference DLCs.
```python
from sagemaker import image_uris
hf_inference_dlc = image_uris.retrieve(framework='huggingface',
region=region,
version='4.12.3',
image_scope='inference',
base_framework_version='pytorch1.9.1',
py_version='py38',
container_version='ubuntu20.04',
instance_type='ml.c5.xlarge')
# '763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-inference:1.9.1-transformers4.12.3-gpu-py38-cu111-ubuntu20.04'
```
### Define Hugging Face models
Next, we need to define the models we want to deploy to our multi-container endpoint. To stick with our example from the introduction, we will deploy an English sentiment-classification model and a German sentiment-classification model. For the English model, we will use [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and for the German model, we will use [oliverguhr/german-sentiment-bert](https://huggingface.co/oliverguhr/german-sentiment-bert).
Similar to the endpoint creation with the SageMaker SDK, we need to provide the "Hub" configurations for the models as `HF_MODEL_ID` and `HF_TASK`.
```python
# english model
englishModel = {
'Image': hf_inference_dlc,
'ContainerHostname': 'englishModel',
'Environment': {
'HF_MODEL_ID':'distilbert-base-uncased-finetuned-sst-2-english',
'HF_TASK':'text-classification'
}
}
# german model
germanModel = {
'Image': hf_inference_dlc,
'ContainerHostname': 'germanModel',
'Environment': {
'HF_MODEL_ID':'oliverguhr/german-sentiment-bert',
'HF_TASK':'text-classification'
}
}
# Set the Mode parameter of the InferenceExecutionConfig field to Direct for direct invocation of each container,
# or Serial to use containers as an inference pipeline. The default mode is Serial.
inferenceExecutionConfig = {"Mode": "Direct"}
```
## Create Multi-Container Endpoint
After we define our model configuration, we can deploy our endpoint. To create/deploy a real-time endpoint with `boto3` you need to create a "SageMaker Model", a "SageMaker Endpoint Configuration" and a "SageMaker Endpoint". The "SageMaker Model" contains our multi-container configuration including our two models. The "SageMaker Endpoint Configuration" contains the configuration for the endpoint. The "SageMaker Endpoint" is the actual endpoint.
```python
deployment_name = "multi-container-sentiment"
instance_type = "ml.c5.4xlarge"
# create SageMaker Model
sm_client.create_model(
ModelName = f"{deployment_name}-model",
InferenceExecutionConfig = inferenceExecutionConfig,
ExecutionRoleArn = role,
Containers = [englishModel, germanModel]
)
# create SageMaker Endpoint configuration
sm_client.create_endpoint_config(
EndpointConfigName= f"{deployment_name}-config",
ProductionVariants=[
{
"VariantName": "AllTraffic",
"ModelName": f"{deployment_name}-model",
"InitialInstanceCount": 1,
"InstanceType": instance_type,
},
],
)
# create SageMaker Endpoint
endpoint = sm_client.create_endpoint(
EndpointName= f"{deployment_name}-ep", EndpointConfigName=f"{deployment_name}-config"
)
```
This will take a few minutes to deploy. You can check the console to see if the endpoint is in service.
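Instead of watching the console, you can also wait programmatically until the endpoint is in service, for example with the built-in `boto3` waiter:
```python
# block until the multi-container endpoint is in service
waiter = sm_client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=f"{deployment_name}-ep")
print(f"{deployment_name}-ep is in service")
```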
## Invoke Multi-Container Endpoint
To invoke our multi-container endpoint we can either use `boto3` or any other AWS SDK or the Amazon SageMaker SDK. We will test both ways and do some light load testing to take a look at the performance of our endpoint in cloudwatch.
```python
english_payload={"inputs":"This is a great way for saving money and optimizing my resources."}
german_payload={"inputs":"Das wird uns sehr helfen unsere Ressourcen effizient zu nutzen."}
```
### Sending requests with `boto3`
To send requests to our models we will use the `sagemaker-runtime` with the `invoke_endpoint` method. Compared to sending regular requests to a single-container endpoint we are passing `TargetContainerHostname` as additional information to point to the container, which should receive the request. In our case this is either `englishModel` or `germanModel`.
#### `englishModel`
```python
import json
import boto3
# create client
invoke_client = boto3.client('sagemaker-runtime')
# send request to the englishModel container
response = invoke_client.invoke_endpoint(
EndpointName=f"{deployment_name}-ep",
ContentType="application/json",
Accept="application/json",
TargetContainerHostname="englishModel",
Body=json.dumps(english_payload),
)
result = json.loads(response['Body'].read().decode())
```
#### `germanModel`
```python
import json
import boto3
# create client
invoke_client = boto3.client('sagemaker-runtime')
# send request to the germanModel container
response = invoke_client.invoke_endpoint(
EndpointName=f"{deployment_name}-ep",
ContentType="application/json",
Accept="application/json",
TargetContainerHostname="germanModel",
Body=json.dumps(german_payload),
)
result = json.loads(response['Body'].read().decode())
```
### Sending requests with `HuggingFacePredictor`
The Python SageMaker SDK can not be used for deploying Multi-Container Endpoints but can be used to invoke/send requests to those. We will use the `HuggingFacePredictor` to send requests to the endpoint, where we also pass the `TargetContainerHostname` as additional information to point to the container, which should receive the request. In our case this is either `englishModel` or `germanModel`.
```python
from sagemaker.huggingface import HuggingFacePredictor
# predictor
predictor = HuggingFacePredictor(f"{deployment_name}-ep")
# english request
en_res = predictor.predict(english_payload, initial_args={"TargetContainerHostname":"englishModel"})
print(en_res)
# german request
de_res = predictor.predict(german_payload, initial_args={"TargetContainerHostname":"germanModel"})
print(de_res)
```
### Load testing the multi-container endpoint
As mentioned, we are doing some light load-testing, meaning sending a few alternating requests to the containers and looking at the latency in cloudwatch.
```python
for i in range(1000):
predictor.predict(english_payload, initial_args={"TargetContainerHostname":"englishModel"})
predictor.predict(german_payload, initial_args={"TargetContainerHostname":"germanModel"})
# link to cloudwatch metrics dashboard
print("https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#metricsV2:graph=~(metrics~(~(~'AWS*2fSageMaker~'ContainerLatency~'EndpointName~'multi-container-sentiment-ep~'ContainerName~'germanModel~'VariantName~'AllTraffic~(visible~false))~(~'...~'englishModel~'.~'.~(visible~false))~(~'.~'Invocations~'.~'.~'.~'.~'.~'.~(stat~'SampleCount))~(~'...~'germanModel~'.~'.~(stat~'SampleCount)))~view~'timeSeries~stacked~false~region~'us-east-1~stat~'Average~period~60~start~'-PT15M~end~'P0D);query=~'*7bAWS*2fSageMaker*2cContainerName*2cEndpointName*2cVariantName*7d")
```
We can see that the latency for the `englishModel` is around 2x lower than for the `germanModel`, which makes sense since the `englishModel` is a DistilBERT model and the German one is a `BERT-base` model.
![latency](/static/blog/sagemaker-huggingface-multi-container-endpoint/latency.png)
In terms of invocations, we can see that both models are invoked the same number of times, which makes sense since our test invoked both of them alternately.
![invocations](/static/blog/sagemaker-huggingface-multi-container-endpoint/invocations.png)
### Delete the Multi-Container Endpoint
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## Conclusion
We successfully deployed two Hugging Face Transformers to Amazon SageMaker for inference using a Multi-Container Endpoint, which allowed us to use the same instance to host multiple models as containers for inference.
Multi-Container Endpoints are a great option to optimize compute utilization and costs for your models. Especially when you have independent inference workloads due to time differences or use-case differences.
You should try Multi-Container Endpoints for your models when you have workloads that are not correlated.
---
You can find the code [here](https://github.com/philschmid/huggingface-sagemaker-multi-container-endpoint/blob/master/sagemaker-notebook.ipynb).
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker | https://www.philschmid.de/automatic-speech-recognition-sagemaker | 2022-04-28 | [
"AWS",
"Wav2vec2",
"Speech",
"Sagemaker"
] | Learn how to do automatic speech recognition/speech-to-text with Hugging Face Transformers, Wav2vec2 and Amazon SageMaker. | Transformer models are changing the world of machine learning, starting with natural language processing, and now, with audio and computer vision. Hugging Face's mission is to democratize good machine learning and give anyone the opportunity to use these new state-of-the-art machine learning models.
Together with Amazon SageMaker and AWS, we have been working on extending the functionalities of the Hugging Face Inference DLC and the Python SageMaker SDK to make it easier to use speech and vision models together with `transformers`.
You can now use the Hugging Face Inference DLC to do [automatic speech recognition](https://huggingface.co/tasks/automatic-speech-recognition) using Meta AI's [wav2vec2](https://arxiv.org/abs/2006.11477) model or Microsoft's [WavLM](https://arxiv.org/abs/2110.13900), or use NVIDIA's [SegFormer](https://arxiv.org/abs/2105.15203) for [semantic segmentation](https://huggingface.co/tasks/image-segmentation).
This guide will walk you through how to do [automatic speech recognition](https://huggingface.co/tasks/automatic-speech-recognition) using [wav2vec2](https://huggingface.co/facebook/wav2vec2-base-960h) and the new `DataSerializer`.
![automatic_speech_recognition](/static/blog/automatic-speech-recognition-sagemaker/automatic_speech_recognition.png)
In this example you will learn how to:
1. Setup a development Environment and permissions for deploying Amazon SageMaker Inference Endpoints.
2. Deploy a wav2vec2 model to Amazon SageMaker for automatic speech recognition
3. Send requests to the endpoint to do speech recognition.
Let's get started! 🚀
---
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
## 1. Setup a development Environment and permissions for deploying Amazon SageMaker Inference Endpoints.
Setting up the development environment and permissions needs to be done for the automatic-speech-recognition example and the semantic-segmentation example. First, we update the `sagemaker` SDK to make sure we have the new `DataSerializer`.
```python
!pip install sagemaker --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.86.0"
```
After we have updated the SDK, we can set the permissions.
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 2. Deploy a wav2vec2 model to Amazon SageMaker for automatic speech recognition
Automatic Speech Recognition (ASR), also known as Speech to Text (STT), is the task of transcribing a given audio to text. It has many applications, such as voice user interfaces.
We use the [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) model to run our recognition endpoint. This model is a fine-tuned checkpoint of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base), pretrained and fine-tuned on 960 hours of Librispeech 16kHz sampled speech audio, achieving 1.8/3.3 WER on the clean/other test sets.
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serializers import DataSerializer
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'facebook/wav2vec2-base-960h',
'HF_TASK':'automatic-speech-recognition',
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
```
Before we are able to deploy our `HuggingFaceModel` class, we need to create a new serializer which supports our audio data. Serializers are used in the Predictor and its `predict` method to serialize our data to a specific `mime-type` that is sent to the endpoint. The default serializer for the `HuggingFacePredictor` is a JSON serializer, but since we are not going to send text data to the endpoint we will use the `DataSerializer`.
```python
# create a serializer for the data
audio_serializer = DataSerializer(content_type='audio/x-audio') # using x-audio to support multiple audio formats
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.g4dn.xlarge', # ec2 instance type
serializer=audio_serializer, # serializer for our audio data.
)
```
## 3. Send requests to the endpoint to do speech recognition.
The `.deploy()` returns an `HuggingFacePredictor` object with our `DataSerializer` which can be used to request inference. This `HuggingFacePredictor` makes it easy to send requests to your endpoint and get the results back.
We will use 2 different methods to send requests to the endpoint:
a. Provide an audio file via path to the predictor
b. Provide a binary audio data object to the predictor
### a. Provide an audio file via path to the predictor
Using an audio file as input is as easy as providing the path to its location. The `DataSerializer` will then read it and send the bytes to the endpoint.
We can use a `librispeech` sample hosted on huggingface.co
```python
!wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
```
To send a request with the path to our audio file, we can use the following code:
```python
audio_path = "sample1.flac"
res = predictor.predict(data=audio_path)
print(res)
# {'text': "GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"}
```
### b. Provide a binary audio data object to the predictor
Instead of providing a path to the audio file, we can also directly provide its bytes by reading the file in Python.
_make sure `sample1.flac` is in the directory_
```python
audio_path = "sample1.flac"
with open(audio_path, "rb") as data_file:
audio_data = data_file.read()
res = predictor.predict(data=audio_data)
print(res)
# {'text': "GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"}
```
### Clean up
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## Conclusion
We successfully managed to deploy Wav2vec2 to Amazon SageMaker for automatic speech recognition. The new `DataSerializer` makes it super easy to work with `mime-types` other than `json`/`txt`, which we are used to from NLP.
With this support we can now build state-of-the-art speech recognition systems on Amazon SageMaker with transparent insights on which models are used and how the data is processed. We could even go further and extend the inference part with a custom `inference.py` to include custom post-processing for grammar correction or punctuation. |
Save up to 90% training cost with AWS Spot Instances and Hugging Face Transformers | https://www.philschmid.de/sagemaker-spot-instance | 2022-03-22 | [
"AWS",
"HuggingFace",
"BERT",
"SageMaker"
] | Learn how to leverage AWS Spot Instances when training Hugging Face Transformers with Amazon SageMaker to save up to 90% training cost. | notebook: [sagemaker/05_spot_instances](https://github.com/huggingface/notebooks/blob/master/sagemaker/05_spot_instances/sagemaker-notebook.ipynb)
[Amazon EC2 Spot Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html) are a way to take advantage of unused EC2 capacity in the AWS cloud. A Spot Instance is an instance that uses spare EC2 capacity that is available for less than the On-Demand price. The hourly price for a Spot Instance is called a Spot price. If you want to learn more about Spot Instances, you should check out the concepts of it in the [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html#spot-pricing). One concept we should nevertheless briefly address here is `Spot Instance interruption`.
> Amazon EC2 terminates, stops, or hibernates your Spot Instance when Amazon EC2 needs the capacity back or the Spot price exceeds the maximum price for your request. Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute warning before it is interrupted.
[Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html) and the [Hugging Face DLCs](https://huggingface.co/docs/sagemaker/main) make it easy to train transformer models using managed Spot instances. Managed spot training can optimize the cost of training models up to 90% over on-demand instances.
As we learned spot instances can be interrupted, causing jobs to potentially stop before they are finished. To prevent any loss of model weights or information Amazon SageMaker offers support for [remote S3 Checkpointing](https://docs.aws.amazon.com/sagemaker/latest/dg/model-checkpoints.html) where data from a local path to Amazon S3 is saved. When the job is restarted, SageMaker copies the data from Amazon S3 back into the local path.
![spot-overview](/static/blog/sagemaker-spot-instance/spot-overview.png)
In this example, we will learn how to use [managed Spot Training](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html) and [S3 checkpointing](https://docs.aws.amazon.com/sagemaker/latest/dg/model-checkpoints.html) with Hugging Face Transformers to save up to 90% of the training costs.
We are going to:
- preprocess a dataset in the notebook and upload it to Amazon S3
- configure checkpointing and spot training in the `HuggingFace` estimator
- run training on a spot instance
_**NOTE: You can run this demo in Sagemaker Studio, your local machine, or Sagemaker Notebook Instances**_
## Development Environment and Permissions
_Note: we only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow if you don't have them installed already._
```python
!pip install "sagemaker>=2.77.0" "transformers==4.12.3" "datasets[s3]==1.18.3" s3fs --upgrade
```
## Permissions
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## Preprocessing
We are using the `datasets` library to download and preprocess the `emotion` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The [emotion](https://github.com/dair-ai/emotion_dataset) dataset consists of 16000 training examples, 2000 validation examples, and 2000 testing examples.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# model_id used for training and preprocessing
model_id = 'distilbert-base-uncased'
# dataset used
dataset_name = 'emotion'
# s3 key prefix for the data
s3_prefix = 'samples/datasets/emotion'
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# tokenizer helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True)
# load dataset
train_dataset, test_dataset = load_dataset(dataset_name, split=['train', 'test'])
# tokenize dataset
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)
# set format for pytorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
```
After we processed the `datasets` we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3.
```python
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
train_dataset.save_to_disk(training_input_path, fs=s3)
# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
test_dataset.save_to_disk(test_input_path, fs=s3)
```
## Configure checkpointing and spot training in the `HuggingFace` estimator
After we have uploaded the dataset, we can configure our spot training and make sure checkpointing is enabled so we do not lose any progress if interruptions happen.
To configure spot training we need to define the `max_wait` and `max_run` in the `HuggingFace` estimator and set `use_spot_instances` to `True`.
- `max_wait`: Duration in seconds until Amazon SageMaker will stop the managed spot training if not completed yet
- `max_run`: Max duration in seconds for training the training job
`max_wait` also needs to be greater than `max_run`, because `max_wait` is the duration for waiting/accessing spot instances (can take time when no spot capacity is free) + the expected duration of the training job.
**Example**
If you expect your training to take 3600 seconds (1 hour) you can set `max_run` to `4000` seconds (buffer) and `max_wait` to `7200` to include a `3200` seconds waiting time for your spot capacity.
```python
# enables spot training
use_spot_instances=True
# max time including spot start + training time
max_wait=7200
# expected training time
max_run=4000
```
To enable checkpointing we need to define `checkpoint_s3_uri` in the `HuggingFace` estimator. `checkpoint_s3_uri` is an S3 URI in which to save the checkpoints. By default, Amazon SageMaker will sync any file written to `/opt/ml/checkpoints` in the training job to `checkpoint_s3_uri`.
_It is possible to adjust `/opt/ml/checkpoints` by overwriting `checkpoint_local_path` in the `HuggingFace` estimator_
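A minimal sketch of that override, assuming we want to sync a different local directory (the path is only an example):
```python
# sketch: overwrite the local checkpoint directory that is synced to S3
custom_checkpoint_local_path = "/opt/ml/my-checkpoints"  # example path, not used below
# passed later as HuggingFace(..., checkpoint_local_path=custom_checkpoint_local_path)
```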
```python
# s3 uri where our checkpoints will be uploaded during training
base_job_name = "emotion-checkpointing"
checkpoint_s3_uri = f's3://{sess.default_bucket()}/{base_job_name}/checkpoints'
```
Next step is to create our `HuggingFace` estimator, provide our `hyperparameters` and add our spot and checkpointing configurations.
```python
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters={
'epochs': 1, # number of training epochs
'train_batch_size': 32, # batch size for training
'eval_batch_size': 64, # batch size for evaluation
'learning_rate': 3e-5, # learning rate used during training
'model_id':model_id, # pre-trained model id
'fp16': True, # Whether to use 16-bit (mixed) precision training
'output_dir':'/opt/ml/checkpoints' # make sure files are saved to the checkpoint directory
}
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'train.py', # fine-tuning script used in training job
source_dir = './scripts', # directory where fine-tuning script is stored
instance_type = 'ml.p3.2xlarge', # instances type used for the training job
instance_count = 1, # the number of instances used for training
base_job_name = base_job_name, # the name of the training job
role = role, # IAM role used in training job to access AWS resources, e.g. S3
transformers_version = '4.12.3', # the transformers version used in the training job
pytorch_version = '1.9.1', # the pytorch_version version used in the training job
py_version = 'py38', # the python version used in the training job
hyperparameters = hyperparameters, # the hyperparameter used for running the training job
use_spot_instances = use_spot_instances,# whether to use spot instances or not
max_wait = max_wait, # max time including spot start + training time
max_run = max_run, # max expected training time
checkpoint_s3_uri = checkpoint_s3_uri, # s3 uri where our checkpoints will be uploaded during training
)
```
When using remote S3 checkpointing you have to make sure that your `train.py` also supports checkpointing. `Transformers` and the `Trainer` offers utilities on how to do this. You only need to add the following snippet to your `Trainer` training script
```python
from transformers.trainer_utils import get_last_checkpoint
# check if checkpoint existing if so continue training
if get_last_checkpoint(args.output_dir) is not None:
logger.info("***** continue training *****")
last_checkpoint = get_last_checkpoint(args.output_dir)
trainer.train(resume_from_checkpoint=last_checkpoint)
else:
trainer.train()
```
## Run training on a spot instance
The last step of this example is to start our managed Spot Training. Therefore, we simply call the `.fit` method of our estimator and provide our dataset.
```python
# define train data object
data = {
'train': training_input_path,
'test': test_input_path
}
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit(data)
# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%
```
After a successful training run you should see your spot savings in the logs.
---
## Conclusion
We successfully managed to run a Managed Spot Training on Amazon SageMaker and save 70% off the training cost, which is a big margin, especially since we only needed to define 3 parameters to set it up.
I can highly recommend using Managed Spot Training if you have a grace period between model training and delivery.
If you want to learn more about Hugging Face Transformers on Amazon SageMaker you can checkout our [documentation](https://huggingface.co/docs/sagemaker/main) or other [examples](https://github.com/huggingface/notebooks/tree/master/sagemaker).
---
You can find the code [here](https://github.com/huggingface/notebooks/blob/master/sagemaker/05_spot_instances/sagemaker-notebook.ipynb).
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Stable Diffusion with Hugging Face Inference Endpoints | https://www.philschmid.de/stable-diffusion-inference-endpoints | 2022-11-28 | [
"Diffusion",
"Inference",
"HuggingFace",
"Generation"
] | Learn how to deploy Stable Diffusion 2.0 on Hugging Face Inference Endpoints to generate images based from text. | Welcome to this Hugging Face Inference Endpoints guide on how to deploy [Stable Diffusion](https://huggingface.co/blog/stable_diffusion)
to generate images for a given input prompt. We will deploy [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2)
to Inference Endpoints for real-time inference using Hugging Face's [🧨 Diffusers library](https://huggingface.co/docs/diffusers/index).
![Stable Diffusion Inference endpoints](/static/blog/stable-diffusion-inference-endpoints/thumbnail.png)
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active credit card. (Add billing [here](https://huggingface.co/settings/billing))
2. You can access the UI at: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
The Tutorial will cover how to:
1. [Deploy Stable Diffusion as an Inference Endpoint](#1-deploy-stable-diffusion-as-an-inference-endpoint)
2. [Test & Generate Images with Stable Diffusion 2.0](#2-test--generate-images-with-stable-diffusion-20)
3. [Integrate Stable Diffusion as API and send HTTP requests using Python](#3-integrate-stable-diffusion-as-api-and-send-http-requests-using-python)
## What is Stable Diffusion?
Stable Diffusion is a text-to-image latent diffusion model created by researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/). It is trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. LAION-5B is the largest, freely accessible multi-modal dataset that currently exists.
This guide will not explain how the model works. If you are interested, you should check out the [Stable Diffusion with 🧨 Diffusers](https://huggingface.co/blog/stable_diffusion) blog post or [The Annotated Diffusion Model](https://huggingface.co/blog/annotated-diffusion)
![stable-diffusion-architecture](/static/blog/stable-diffusion-inference-endpoints/stable-diffusion.png)
## What are Hugging Face Inference Endpoints?
[🤗 Inference Endpoints](https://huggingface.co/inference-endpoints) offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
A Hugging Face Inference Endpoint is built from a [Hugging Face Model Repository](https://huggingface.co/models). It supports all the [Transformers and Sentence-Transformers tasks as well as diffusers tasks](https://huggingface.co/docs/inference-endpoints/supported_tasks) and any arbitrary ML Framework through easy customization by adding a [custom inference handler.](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) This [custom inference handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) can be used to implement simple inference pipelines for ML Frameworks like Keras, Tensorflow, and sci-kit learn or to add custom business logic to your existing transformers pipeline.
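As a rough sketch, such a custom handler is a `handler.py` in the model repository that exposes an `EndpointHandler` class with an `__init__` and a `__call__` method (the body below is only illustrative):
```python
# handler.py — illustrative sketch of a custom inference handler
from typing import Any, Dict, List

class EndpointHandler:
    def __init__(self, path: str = ""):
        # called once on startup: load your model/pipeline from the repository path,
        # e.g. a scikit-learn or Keras model instead of a transformers pipeline
        self.path = path

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # called for every request; "inputs" carries the request payload
        inputs = data["inputs"]
        # add custom pre-processing, prediction and business logic here
        return [{"received": inputs}]
```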
## 1. Deploy Stable Diffusion as an Inference Endpoint
In this tutorial, you will learn how to deploy any Stable-Diffusion model from the [Hugging Face Hub](https://huggingface.co/models?other=stable-diffusion) to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how to integrate it via an API into your products.
You can access the UI of Inference Endpoints directly at: [https://ui.endpoints.huggingface.co/](https://ui.endpoints.huggingface.co/) or through the [Landingpage](https://huggingface.co/inference-endpoints).
The first step is to deploy our model as an Inference Endpoint. Therefore we add the Hugging face repository Id of the Stable Diffusion model we want to deploy. In our case, it is `stabilityai/stable-diffusion-2`.
![repository](/static/blog/stable-diffusion-inference-endpoints/repository.png)
_Note: If the repository is not showing up in the search it might be gated, e.g. [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). To deploy gated models you need to accept the terms on the model page. Additionally, it is currently only possible to deploy gated repositories from user accounts and not within organizations._
Now, we can make changes to the provider, region, or instance we want to use as well as configure the security level of our endpoint. The easiest is to keep the suggested defaults from the application.
![settings](/static/blog/stable-diffusion-inference-endpoints/settings.png)
We can deploy our model by clicking on the “Create Endpoint” button. Once we click the “create” button, Inference Endpoints will create a dedicated container with the model and start our resources. After a few minutes, our endpoint is up and running.
## 2. Test & Generate Images with Stable Diffusion 2.0
Before integrating the endpoint and model into our applications, we can demo and test the model directly in the [UI](https://ui.endpoints.huggingface.co/endpoints). Each Inference Endpoint comes with an inference widget similar to the ones you know from the [Hugging Face Hub](https://huggingface.co/).
We can provide a prompt for the image to be generated. Let's try `realistic render portrait of group of flying blue whales towards the moon, sci - fi, extremely detailed, digital painting`.
![detail-page](/static/blog/stable-diffusion-inference-endpoints/detail-page.png)
## 3. Integrate Stable Diffusion as API and send HTTP requests using Python
Hugging Face Inference endpoints can directly work with binary data, meaning we can directly send our prompt and get an image in return. We are going to use **`requests`** to send our requests and use `PIL` to save the generated images to disk. (Make sure you have them installed: **`pip install requests Pillow`**)
```python
import json
import requests as r
from io import BytesIO
from PIL import Image
ENDPOINT_URL="" # url of your endpoint
HF_TOKEN="" # token where you deployed your endpoint
def generate_image(prompt:str):
payload = {"inputs": prompt}
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
"Accept": "image/png" # important to get an image back
}
response = r.post(ENDPOINT_URL, headers=headers, json=payload)
img = Image.open(BytesIO(response.content))
return img
# define your prompt
prompt = "realistic render portrait realistic render portrait of group of flying blue whales towards the moon, intricate, toy, sci - fi, extremely detailed, digital painting, sculpted in zbrush, artstation, concept art, smooth, sharp focus, illustration, chiaroscuro lighting, golden ratio, incredible art by artgerm and greg rutkowski and alphonse mucha and simon stalenhag"
# generate image
image = generate_image(prompt)
# save to disk
image.save("generation.png")
```
![generation](/static/blog/stable-diffusion-inference-endpoints/generation.png)
We can also change the hyperparameter for the [Stable Diffusion pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionPipeline) by providing the parameters in the `parameters` attribute when sending requests, below is an example JSON payload on how to generate a `768x768` image.
```json
{
"inputs": "realistic render portrait realistic render portrait of group of flying blue whales towards the moon, intricate, toy, sci - fi, extremely detailed, digital painting, sculpted in zbrush, artstation, concept art, smooth, sharp focus, illustration, chiaroscuro lighting, golden ratio, incredible art by artgerm and greg rutkowski and alphonse mucha and simon stalenhag",
"parameters": {
"width": 768,
"height": 768,
"guidance_scale": 9
}
}
```
---
Now it's your turn to be creative and generate some amazing images with Stable Diffusion 2.0 on [Inference Endpoints](https://huggingface.co/inference-endpoints).
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
My path to become a certified solution architect | https://www.philschmid.de/my-path-to-become-a-certified-solution-architect | 2020-10-24 | [
"AWS",
"Certificate",
"Cloud"
] | This is the Story of how I became a certified solution architect within 28 hours of preparation. | Hello, my name is Philipp and I am working as a machine learning engineer at a technology incubation startup. At work I
design and implement cloud-native machine learning architectures for fin-tech and insurance companies.
I started to work with AWS 2 1/2 years ago. Since then I have built many projects, both privately and at work, using AWS
services. I like the serverless services of AWS the most. For me, serverless first always applies.
In short, I have several years of professional and part-time experience with AWS, but no certificate. I know that
hands-on experience and knowledge are more important than certificates. But sometimes you need a sheet of paper to prove
that.
## Disclaimer
> This article won’t show how everyone gets certified in under 30h of studying. Instead, it should motivate others who
> have the same experience as me but are too lazy.
---
## The Certificate
So a few weeks ago I decided to do the "AWS Certified Solutions Architect - Associate SAA-C02" certificate.
The "AWS Solutions Architect – Associate SAA-C02" certificate validates the ability to architect and deploy dynamically
scalable, highly available, fault-tolerant, and reliable applications on AWS. The exam takes 130 minutes and consists of
65 questions
---
## Research
First, I researched the exam criteria. According to the
[AWS Certified Solutions Architect - Associate SAA-C02 Exam Guide](https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate-Exam-Guide_v1.1_2019_08_27_FINAL.pdf)
it consists of these 4 topics.
![SAA-C02-Content-Domain-Weights](/static/blog/my-path-to-become-a-certified-solution-architect/SAA-C02-Content-Domain-Weights.png)
[excerpt from the PDF document](https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate-Exam-Guide_v1.1_2019_08_27_FINAL.pdf)
Second, I researched online courses and summary/ cheat sheets. I quickly found an online course as I was already
familiar with [a cloud guru](https://acloudguru.com/). After some googling, I found
[Jayendra's Blog](https://jayendrapatil.com/aws-certified-solutions-architect-associate-saa-c02-exam-learning-path/),
which has a cheat sheet for almost every topic.
---
## Study
Initially, I watched the
["AWS Certified Solutions Architect Associate SAA-C02" course on a cloud guru](https://acloud.guru/learn/aws-certified-solutions-architect-associate).
Due to my experience, I watched the course at 2x speed. Every episode that contained a lab I implemented
independently afterwards.
After I completed the course I did one practice exam from a cloud guru and achieved 78%. Therefore I booked my exam
for 3 days later. In these 3 days, I studied 4 hours a day, did 10 practice exams, and learned with the cheat sheets.
### Learning Path Overview
| Week | Course Chapter | Topic | Time invested |
| ---- | -------------- | ----------------------------------------- | ------------- |
| 1 | 1&2 | Introduction & 10,000-Foot Overview | 1/2h |
| 1 | 3 | Identity and Access Management & S3 | 2h |
| 2 | 4 | EC2 | 3h |
| 3 | 5 | Databases on AWS | 2h |
| 3 | 6 | Advanced IAM | 1/2h |
| 4 | 7 | Route 53 | 1h |
| 4 | 8 | VPCs | 2h |
| 5 | 9 | HA Architecture | 2h |
| 6 | 10 | Applications | 1h |
| 6 | 11 | Security | 1/2h |
| 6 | 12 | Serverless | 1h |
| 7 | - | 10x Practice Exams & studying cheatsheets | 12h |
In total it took me 7 Weeks and ~28h to complete the "AWS Certified Solutions Architect - Associate SAA-C02"
certificate.
---
## Exam
On the day of the exam, I took the last practice exam and did every quiz from the a cloud guru course. I took my exam at
home through PearsonVue. As an exemplary examinee, I fulfilled all requirements in advance and was ready for my exam.
The check-in was very easy and the Instructor was very kind. It took around 10 minutes to start.
It took me 80 minutes to finish and pass the exam.
![My AWS Certificate](/static/blog/my-path-to-become-a-certified-solution-architect/certificate.png)
---
## Learnings
Practice exams are good, but because AWS is developing so fast, they are not always accurate. In my case,
I had around 8–10 questions about Storage Gateway, EFS, and FSx, while the practice exams contained far fewer of them. My learning from
this is that I should rather do one less practice exam and read some more documentation.
My second learning is that the confirmation of your abilities is something great to be proud of.
---
Thanks for reading and a special Thanks to
[Jayendra's Blog](https://jayendrapatil.com/aws-certified-solutions-architect-associate-saa-c02-exam-learning-path/) and
[a cloud guru](https://acloudguru.com/).
If you have any questions, feel free to contact me. You can connect with me on
[Twitter](https://twitter.com/_philschmid) and [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/) or
write me an [email](https://www.philschmid.de). |
MLOps: Using the Hugging Face Hub as model registry with Amazon SageMaker | https://www.philschmid.de/huggingface-hub-amazon-sagemaker | 2021-11-16 | [
"AWS",
"BERT",
"HuggingFace",
"Sagemaker"
] | Learn how to automatically save your model weights, logs, and artifacts to the Hugging Face Hub using Amazon SageMaker and how to deploy the model afterwards for inference. | The [Hugging Face Hub](hf.co/models) is the largest collection of models, datasets, and metrics in order to democratize and advance AI for everyone 🚀. The Hugging Face Hub works as a central place where anyone can share and explore models and datasets.
In this blog post you will learn how to automatically save your model weights, logs, and artifacts to the Hugging Face Hub using Amazon SageMaker and how to deploy the model afterwards for inference. 🏎
This will allow you to use the Hugging Face Hub as the backbone of your model-versioning, -storage & -management 👔
You will be able to easily share your models inside your own private organization or with the whole Hugging Face community without heavy lifting thanks to [built-in permission and access control features](https://huggingface.co/docs/hub/security). 🔒
![huggingface-sagemaker-hub](/static/blog/huggingface-hub-amazon-sagemaker/huggingface-sagemaker-hub.png)
---
In this demo, we will use Hugging Face's `transformers` and `datasets` libraries together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer for multi-class text classification. In particular, the pre-trained model will be fine-tuned using the `emotion` dataset. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
_**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_
## Development Environment and Permissions
_*Note:* We only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow if you don't have it installed already._
```python
!pip install "sagemaker>=2.69.0" "transformers==4.12.3" --upgrade
# using older dataset due to incompatibility of sagemaker notebook & aws-cli with > s3fs and fsspec to >= 2021.10
!pip install "datasets==1.13" --upgrade
```
```python
import sagemaker
assert sagemaker.__version__ >= "2.69.0"
```
## Permissions
_If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
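When running outside of SageMaker (e.g. locally), `sagemaker.get_execution_role()` will raise an error, so you have to look up the role ARN yourself. A minimal sketch of this fallback, assuming you already created a SageMaker execution role (the role name below is a placeholder):
```python
# fallback for local environments: get_execution_role() only works inside SageMaker
# the role name "sagemaker_execution_role" is a placeholder - use the role you created
import boto3
import sagemaker

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]
```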
```python
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## Preprocessing
We are using the `datasets` library to download and preprocess the `emotion` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The [emotion](https://github.com/dair-ai/emotion_dataset) dataset consists of 16000 training examples, 2000 validation examples, and 2000 testing examples.
### Tokenization
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# tokenizer used in preprocessing
tokenizer_name = 'distilbert-base-uncased'
# dataset used
dataset_name = 'emotion'
# s3 key prefix for the data
s3_prefix = 'samples/datasets/emotion'
```
```python
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
# tokenizer helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True)
# load dataset
train_dataset, test_dataset = load_dataset(dataset_name, split=['train', 'test'])
# tokenize dataset
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)
# set format for pytorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
```
## Uploading data to `sagemaker_session_bucket`
After we processed the `datasets` we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3.
```python
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
train_dataset.save_to_disk(training_input_path, fs=s3)
# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
test_dataset.save_to_disk(test_input_path, fs=s3)
```
## Creating an Estimator and start a training job
List of supported models: https://huggingface.co/models?library=pytorch,transformers&sort=downloads
### setting up `push_to_hub` for our model.
The `train.py` script implements `push_to_hub` using the `Trainer` and `TrainingArguments`. To push our model to the [Hub](https://huggingface.co/models), we need to define the `push_to_hub` hyperparameter, set it to `True`, and provide our [Hugging Face Token](https://hf.co/settings/token). Additionally, we can configure the repository name and saving strategy using the `hub_model_id` and `hub_strategy` hyperparameters.
You can find documentation to those parameters [here](https://huggingface.co/transformers/main_classes/trainer.html).
We are going to provide our HF Token securely without exposing it to the public using `notebook_login` from the `huggingface_hub` SDK.
But be careful: your token will still be visible inside the logs of the training job. If you run `huggingface_estimator.fit(...,wait=True)`, you will see the token in the logs.
A better way of providing your `HF_TOKEN` to your training jobs would be using [AWS Secret Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html)
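For illustration, here is a minimal sketch of how such a lookup could look with `boto3`; the secret name `hf-token` is a placeholder that you would have to create yourself first (e.g. via the AWS console or CLI):
```python
# illustrative sketch: read the Hugging Face token from AWS Secrets Manager
# the secret name "hf-token" is a placeholder - create the secret beforehand
import boto3

secrets_client = boto3.client("secretsmanager")
hf_token = secrets_client.get_secret_value(SecretId="hf-token")["SecretString"]
```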
You can also directly find your token at [https://hf.co/settings/token](https://hf.co/settings/token).
```python
from huggingface_hub import notebook_login
notebook_login()
```
Now we can use `HfFolder.get_token()` to dynamically load our token from disk and use it as a hyperparameter. The `train.py` script can be found in the [Github repository](https://github.com/huggingface/notebooks/blob/master/sagemaker/14_train_and_push_to_hub/scripts/train.py).
```python
from sagemaker.huggingface import HuggingFace
from huggingface_hub import HfFolder
import time
# hyperparameters, which are passed into the training job
hyperparameters={'epochs': 1, # number of training epochs
'train_batch_size': 32, # batch size for training
'eval_batch_size': 64, # batch size for evaluation
'learning_rate': 3e-5, # learning rate used during training
'model_id':'distilbert-base-uncased', # pre-trained model
'fp16': True, # Whether to use 16-bit (mixed) precision training
'push_to_hub': True, # Defines if we want to push the model to the hub
'hub_model_id': 'sagemaker-distilbert-emotion', # The model id of the model to push to the hub
'hub_strategy': 'every_save', # The strategy to use when pushing the model to the hub
'hub_token': HfFolder.get_token() # HuggingFace token to have permission to push
}
# define Training Job Name
job_name = f'push-to-hub-sample-{time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())}'
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'train.py', # fine-tuning script used in training job
source_dir = './scripts', # directory where fine-tuning script is stored
instance_type = 'ml.p3.2xlarge', # instances type used for the training job
instance_count = 1, # the number of instances used for training
base_job_name = job_name, # the name of the training job
role = role, # IAM role used in training job to access AWS resources, e.g. S3
transformers_version = '4.12', # the transformers version used in the training job
pytorch_version = '1.9', # the pytorch_version version used in the training job
py_version = 'py38', # the python version used in the training job
hyperparameters = hyperparameters, # the hyperparameter used for running the training job
)
```
After we defined our Hyperparameter and Estimator we can start the training job.
```python
# define a data input dictionary with our uploaded s3 uris
data = {
'train': training_input_path,
'test': test_input_path
}
# starting the train job with our uploaded datasets as input
# setting wait to False to not expose the HF Token
huggingface_estimator.fit(data, wait=False)
```
Since we set `wait=False` to hide the logs we can use a `waiter` to see when our training job is done.
```python
# adding waiter to see when training is done
waiter = huggingface_estimator.sagemaker_session.sagemaker_client.get_waiter('training_job_completed_or_stopped')
waiter.wait(TrainingJobName=huggingface_estimator.latest_training_job.name)
```
## Accessing the model on [hf.co/models](https://hf.co/models)
We can access the model on [hf.co/models](https://hf.co/models) using the `hub_model_id` and our username.
```python
from huggingface_hub import HfApi
whoami = HfApi().whoami()
username = whoami['name']
print(f"https://huggingface.co/{username}/{hyperparameters['hub_model_id']}")
# https://huggingface.co/philschmid/sagemaker-distilbert-emotion
```
## Deploying the model from Hugging Face to a SageMaker Endpoint
To deploy our model to Amazon SageMaker, we can create a `HuggingFaceModel` and provide the Hub configuration (`HF_MODEL_ID` & `HF_TASK`) to deploy it. Alternatively, we can use the `huggingface_estimator` to deploy our model from S3 with `huggingface_estimator.deploy()`.
```python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':f"{username}/{hyperparameters['hub_model_id']}",
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.12',
pytorch_version='1.9',
py_version='py38',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
```
Then, we use the returned predictor object to call the endpoint.
```python
sentiment_input= {"inputs": "Winter is coming and it will be dark soon."}
predictor.predict(sentiment_input)
```
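The endpoint returns a list of `label`/`score` dictionaries. If the training script did not set an explicit `id2label` mapping, the labels come back as generic `LABEL_<id>` strings. A small, hedged sketch (continuing from the cell above) of mapping them back to the emotion class names:
```python
# illustrative only: map generic "LABEL_<id>" outputs back to the emotion dataset classes
# (skip this if your model config already contains a proper id2label mapping)
emotion_labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]

result = predictor.predict(sentiment_input)        # e.g. [{'label': 'LABEL_0', 'score': 0.93}] (illustrative)
label_id = int(result[0]["label"].split("_")[-1])
print(emotion_labels[label_id], result[0]["score"])
```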
Finally, we delete the inference endpoint.
```python
predictor.delete_endpoint()
```
# Conclusion
With the `push_to_hub` integration of the `Trainer API`, we were able to automatically push our model weights and logs, based on the `hub_strategy`, to the Hugging Face Hub. With this, we benefit from automatic model versioning through the git system and [built-in permission and access control features](https://huggingface.co/docs/hub/security).
The combination of using Amazon SageMaker with the Hugging Face Hub allows Machine Learning Teams to easily collaborate across Regions and Accounts using the private and secure Organization to manage, monitor and deploy their own models into production.
---
You can find the code [here](https://github.com/huggingface/notebooks/blob/master/sagemaker/14_train_and_push_to_hub/sagemaker-notebook.ipynb) and feel free to open a thread in the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
AWS Lambda with custom docker images as runtime | https://www.philschmid.de/aws-lambda-with-custom-docker-image | 2020-12-02 | [
"AWS",
"Serverless",
"Docker"
] | Learn how to build and deploy an AWS Lambda function with a custom python docker container as runtime with the use of Amazon ECR. | It's the most wonderful time of the year. Of course, I'm not talking about Christmas but re:Invent. It is re:Invent
time. Due to the current situation in the world, re:Invent does not take place like every year in Las Vegas but is
entirely virtual and for free. This means that it is possible for everyone to attend. In addition to this, this year it
lasts 3 weeks from 30.11.2020 to 18.12.2020. If you haven't already registered, do it
[here](https://virtual.awsevents.com/).
![meme](/static/blog/aws-lambda-with-custom-docker-image/meme.png)
In the opening keynote, Andy Jassy presented the AWS Lambda Container Support, which allows you to use custom container
(docker) images as a runtime for AWS Lambda. With that, we can build runtimes larger than the previous 250 MB limit, be
it for "State-of-the-Art" NLP APIs with BERT or complex processing.
![screenhsot-andy-jessy](/static/blog/aws-lambda-with-custom-docker-image/reinvent.png)
photo from the keynote by Andy Jassy, rights belong to Amazon
Furthermore, you can now configure AWS Lambda functions with up to
[10 GB of Memory and 6 vCPUs](https://aws.amazon.com/de/blogs/aws/new-for-aws-lambda-functions-with-up-to-10-gb-of-memory-and-6-vcpus/?nc1=b_rp).
In [their blog](https://aws.amazon.com/de/blogs/aws/new-for-aws-lambda-container-image-support/?nc1=b_rp) post, Amazon
explains how to use containers as a runtime for AWS lambda via the console.
But the blog post does not explain how to use custom `docker` images with the Serverless Application Model. For these
circumstances, I created this blog post.
---
## Services included in this tutorial
### AWS Lambda
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a serverless computing service that lets you
run code without managing servers. It executes your code only when required and scales automatically, from a few
requests per day to thousands per second.
### Amazon Elastic Container Registry
[Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/?nc1=h_ls) is a fully managed container registry.
It allows us to store, manage, and share docker container images. You can share docker containers privately within your
organization or publicly worldwide for anyone.
### AWS Serverless Application Model
The [AWS Serverless Application Model (SAM)](https://aws.amazon.com/serverless/sam/) is an open-source framework and CLI
to build serverless applications on AWS. You define the application you want using `yaml` format. Afterwards, you build,
test, and deploy using the SAM CLI.
---
## Tutorial
We are going to build an AWS Lambda with a `docker` container as runtime using the "AWS Serverless Application Model".
We create a new custom `docker` image using the presented Lambda Runtime API images.
**What are we going to do:**
- Install and setup `sam`
- Create a custom `docker` image
- Deploy a custom `docker` image to ECR
- Deploy AWS Lambda function with a custom `docker` image
You can find the complete code in this [Github repository.](https://github.com/philschmid/aws-lambda-with-docker-image)
---
## Install and setup `sam`
AWS provides a
[5 step guide on how to install](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html)
`sam`. In this tutorial, we are going to skip steps 1-3 and assume you already have an AWS Account, an IAM user with the
correct permissions set up, and `docker` installed and set up; otherwise, check out this
[link](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html).
The easiest way is to create an IAM user with `AdministratorAccess` (but I don't recommend this for production use
cases).
We are going to continue with step 4 _"installing Homebrew"._ To install homebrew we run the following command in our
terminal.
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
```
_Note: Linux Users have to add Homebrew to your PATH by running the following commands._
```bash
test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
```
Afterwards we can run `brew --version` to verify that Homebrew is installed.
```bash
brew --version
```
The fifth and last step is to install `sam` using homebrew. We can install the SAM CLI using `brew install`.
```bash
brew tap aws/tap
brew install aws-sam-cli
```
After we installed it, we have to make sure we have at least version `1.13.0` installed.
```bash
sam --version
# SAM CLI, version 1.13.0
```
To update `sam` if you have it installed you can run `brew upgrade aws-sam-cli`.
```bash
brew upgrade aws-sam-cli
```
---
## Create a custom `docker` image
After the setup, we are going to build a custom python `docker` image.
We create an `app.py` file and paste the following code into it.
```python
import json
def handler(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"input": event
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
To containerize our Lambda function, we create a `Dockerfile` in the same directory and copy the following content into it.
```bash
FROM public.ecr.aws/lambda/python:3.8
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
```
Additionally, we can add a `.dockerignore` file to exclude files from our container image.
```bash
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
events
```
To build our custom `docker` image, we run:
```bash
docker build -t docker-lambda .
```
and then to test it we run
```bash
docker run -d -p 8080:8080 docker-lambda
```
Afterwards, in a separate terminal, we can then locally invoke the function using `curl`.
```bash
curl -XPOST "http://localhost:8080/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'
```
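If you prefer Python over `curl`, the same invocation could look like this: a small sketch using the `requests` library, where the expected response mirrors the handler defined above.
```python
import requests

# invoke the local Lambda Runtime Interface Emulator (same endpoint as the curl call above)
url = "http://localhost:8080/2015-03-31/functions/function/invocations"
response = requests.post(url, json={"payload": "hello world!"})
print(response.json())
# expected shape: {'statusCode': 200, 'body': '{"message": "Go Serverless v1.0! ...", "input": {...}}'}
```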
---
## Deploy a custom `docker` image to ECR
Since we now have a local `docker` image we can deploy this to ECR. Therefore we need to create an ECR repository with
the name `docker-lambda`.
```bash
aws ecr create-repository --repository-name docker-lambda
```
**using AWS CLI V1.x**
To be able to push our images, we need to log in to ECR. We execute the output of the `aws ecr get-login` command by
wrapping it in `$()`. (Yes, the `$` is intended).
```bash
$(aws ecr get-login --no-include-email --region eu-central-1)
```
**using AWS CLI V2.x**
```bash
aws_region=eu-central-1
aws_account_id=891511646143
aws ecr get-login-password \
--region $aws_region \
| docker login \
--username AWS \
--password-stdin $aws_account_id.dkr.ecr.$aws_region.amazonaws.com
```
read more [here](https://github.com/aws/containers-roadmap/issues/735).
Next we need to `tag` / rename our previously created image to an ECR format. The format for this is
`{AccountID}.dkr.ecr.{region}.amazonaws.com/{repository-name}`
```bash
docker tag docker-lambda $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/docker-lambda
```
To check if it worked we can run `docker images` and should see an image with our tag as name
![docker-image](/static/blog/aws-lambda-with-custom-docker-image/docker-image.png)
Finally, we push the image to ECR Registry.
```bash
docker push 891511646143.dkr.ecr.eu-central-1.amazonaws.com/docker-lambda
```
---
## Deploy AWS Lambda function with a custom `docker` image
Now, we can create our `template.yaml` to define our lambda function using our `docker` image. In the `template.yaml` we
include the configuration for our AWS Lambda function. I provide the complete `template.yaml` for this example, but we go
through all the details we need for our `docker` image and leave out all standard configurations. If you want to learn
more about the `sam template.yaml`, you can read through the documentation
[here](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-resources-and-properties.html).
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: serverless-aws-lambda-custom-docker
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
Function:
Timeout: 3
Resources:
MyCustomDocker:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
FunctionName: MyCustomDocker
ImageUri: 891511646143.dkr.ecr.eu-central-1.amazonaws.com/docker-lambda:latest
PackageType: Image
Events:
HelloWorld:
Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
Properties:
Path: /hello
Method: get
Outputs:
# ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
# Find out more about other implicit resources you can reference within SAM
# https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
MyCustomDockerApi:
Description: 'API Gateway endpoint URL for Prod stage for Hello World function'
Value: !Sub 'https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/'
```
To use a `docker` image in our `template.yaml` we have to include the parameters `ImageUri` and `PackageType` in our
`AWS::Serverless::Function` resource. The `ImageUri`, as the name suggests, is the URL of our docker image. For an ECR
image, the URL looks like this: `123456789.dkr.ecr.us-east-1.amazonaws.com/myimage:latest`, and for a public docker image
like this: `namespace/image:tag` or `docker.io/namespace/image:tag`.
`PackageType` defines the type we provide to our AWS Lambda function, in our case an `Image`.
Afterwards, we can deploy our application using `sam deploy`, and that's it.
```bash
sam deploy --guided
```
The guided deployment will walk us through all required parameters and will create a `samconfig.toml` for us afterwards.
![sam-deployment](/static/blog/aws-lambda-with-custom-docker-image/sam-deployment.png)
After the successful deployment, we should see something like this.
![deployment-result](/static/blog/aws-lambda-with-custom-docker-image/deployment-result.png)
We take the API Gateway URL from the `Outputs` section and use any REST client to test it.
![insomnia](/static/blog/aws-lambda-with-custom-docker-image/insomnia.png)
It worked. 🚀
We successfully deployed and created an AWS Lambda function with a custom `docker` image as runtime.
---
## Conclusion
The release of the AWS Lambda Container Support enables much wider use of AWS Lambda and Serverless. It fixes many
existing problems and gives us greater scope for the deployment of serverless applications.
Another area in which I see great potential is machine learning, as the custom runtime enables us to include larger
machine learning models in our runtimes. The increase in configurable memory and vCPUs boosts this even more.
The future looks more than golden for AWS Lambda and Serverless.
---
You can find the [GitHub repository](https://github.com/philschmid/aws-lambda-with-docker-image) with the complete code
[here.](https://github.com/philschmid/aws-lambda-with-docker-image)
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Outperform OpenAI GPT-3 with SetFit for text-classification | https://www.philschmid.de/getting-started-setfit | 2022-10-18 | [
"GPT3",
"HuggingFace",
"Transformers",
"SetFit"
] | Learn how to use SetFit to create a text-classification model with only a `8` labeled samples per class, or `32` samples in total. You will also learn how to improve your model by using hyperparamter tuning. | In many Machine Learning applications, the amount of available labeled data is a barrier to producing a high-performing model.
In the last 2 years, developments have shown that you can overcome this data limitation by using Large Language Models, like [OpenAI GPT-3](https://openai.com/blog/gpt-3-apps/), together with a _few_ examples as prompts at inference time to achieve good results. These developments improve the situation of missing labeled data but introduce a new problem: the access to and cost of Large Language Models.
But a research group led by [Intel Labs](https://www.intel.com/content/www/us/en/research/overview.html), the [UKP Lab](https://www.informatik.tu-darmstadt.de/ukp/ukp_home/index.en.jsp), and [Hugging Face](https://huggingface.co/) released a new approach, called "SetFit" (https://arxiv.org/abs/2209.11055), that can be used to create highly accurate text-classification models with limited labeled data.
SetFit outperforms GPT-3 on 7 out of 11 tasks, while being 1600x smaller.
![setfit-vs-gpt3](/static/blog/getting-started-setfit/setfit-vs-gpt3.png)
---
In this blog, you will learn how to use [SetFit](https://github.com/huggingface/setfit) to create a text-classification model with only `8` labeled samples per class, or `32` samples in total. You will also learn how to improve your model by using hyperparameter tuning.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Create Dataset](#2-create-dataset)
3. [Fine-Tune Classifier with SetFit](#3-fine-tune-classifier-with-setfit)
4. [Use Hyperparameter search to optimize results](#4-use-hyperparameter-search-to-optimize-result)
## Why SetFit is better
Compared to other few-shot learning methods, SetFit has several unique features:
🗣 No prompts or verbalisers: Current techniques for few-shot fine-tuning require handcrafted prompts. SetFit dispenses with prompts altogether by generating rich embeddings directly from text examples.
🏎 Fast to train: SetFit doesn't require large-scale models like T0 or GPT-3 to achieve high accuracy.
🌎 Multilingual support: SetFit can be used with any Sentence Transformer on the Hub.
---
Now that we know why SetFit is amazing, let's get started. 🚀
_Note: This tutorial was created and run on a g4dn.xlarge AWS EC2 Instance including a NVIDIA T4._
## 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including SetFit. Running the following cell will install all the required packages.
```python
%pip install setfit[optuna]==0.3.0 datasets -U
```
## 2. Create Dataset
We are going to use the [ag_news](https://huggingface.co/datasets/ag_news) dataset, which is a news article classification dataset with `4` classes: World (0), Sports (1), Business (2), Sci/Tech (3).
The test split of the dataset contains `7600` examples, which will be used to evaluate our model. The train split contains `120000` examples, which is a nice amount of data for fine-tuning a regular model.
But to showcase SetFit, we want to create a dataset with only `8` labeled samples per class, or `32` data points in total.
```python
from datasets import load_dataset,concatenate_datasets
# Load the dataset
dataset = load_dataset("ag_news")
# create train dataset
seed=20
labels = 4
samples_per_label = 8
sampled_datasets = []
# find the number of samples per label
for i in range(labels):
sampled_datasets.append(dataset["train"].filter(lambda x: x["label"] == i).shuffle(seed=seed).select(range(samples_per_label)))
# concatenate the sampled datasets
train_dataset = concatenate_datasets(sampled_datasets)
# create test dataset
test_dataset = dataset["test"]
```
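As a quick, optional sanity check, we can verify that the sampled training set really contains 8 examples per class:
```python
from collections import Counter

# 4 classes x 8 samples = 32 training examples, full test split for evaluation
print(len(train_dataset))               # 32
print(Counter(train_dataset["label"]))  # Counter({0: 8, 1: 8, 2: 8, 3: 8})
print(len(test_dataset))                # 7600
```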
## 3. Fine-Tune Classifier with SetFit
When using SetFit, we first fine-tune a Sentence Transformer model using our labeled data and contrastive training, where positive and negative pairs are created by in-class and out-of-class selection.
In the second step, a classification head is trained on the encoded embeddings with their respective class labels.
![setfit_diagram_process](/static/blog/getting-started-setfit/setfit_diagram_process.png)
As the Sentence Transformer, we are going to use [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). (You could replace the model with any sentence transformer available on hf.co.)
The Python [SetFit](https://github.com/huggingface/setfit) package implements useful classes and functions to make the fine-tuning process straightforward and easy. Similar to the Hugging Face [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) class, SetFit implements the `SetFitTrainer` class, which is responsible for the training loop.
```python
from setfit import SetFitModel, SetFitTrainer
from sentence_transformers.losses import CosineSimilarityLoss
# Load a SetFit model from Hub
model_id = "sentence-transformers/all-mpnet-base-v2"
model = SetFitModel.from_pretrained(model_id)
# Create trainer
trainer = SetFitTrainer(
model=model,
train_dataset=train_dataset,
eval_dataset=test_dataset,
loss_class=CosineSimilarityLoss,
metric="accuracy",
batch_size=64,
num_iterations=20, # The number of text pairs to generate for contrastive learning
num_epochs=1, # The number of epochs to use for contrastive learning
)
# Train and evaluate
trainer.train()
metrics = trainer.evaluate()
print(f"model used: {model_id}")
print(f"train dataset: {len(train_dataset)} samples")
print(f"accuracy: {metrics['accuracy']}")
# model used: sentence-transformers/all-mpnet-base-v2
# train dataset: 32 samples
# accuracy: 0.8647368421052631
```
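Before tuning anything further, we can run a quick smoke test with the fine-tuned model. A minimal sketch; the exact return type of the prediction call may differ slightly between SetFit versions:
```python
# quick smoke test; ag_news label mapping: 0=World, 1=Sports, 2=Business, 3=Sci/Tech
preds = model([
    "Stock markets rallied after the central bank kept interest rates unchanged.",
    "The championship final went into overtime after a late equalizer.",
])
print(preds)  # e.g. [2, 1] -> Business, Sports (illustrative output)
```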
## 4. Use Hyperparameter search to optimize result
The `SetFitTrainer` provides a `hyperparameter_search()` method that you can use to find the best hyperparameters for the data. `SetFit` leverages `optuna` under the hood to perform the hyperparameter search. To use the hyperparameter search, we need to define a `model_init` method, which creates our model for every "run", and a `hp_space` method that defines the hyperparameter search space.
```python
from setfit import SetFitModel, SetFitTrainer
from sentence_transformers.losses import CosineSimilarityLoss
# model specific hyperparameters
def model_init(params):
params = params or {}
max_iter = params.get("max_iter", 100)
solver = params.get("solver", "liblinear")
model_id = params.get("model_id", "sentence-transformers/all-mpnet-base-v2")
model_params = {
"head_params": {
"max_iter": max_iter,
"solver": solver,
}
}
return SetFitModel.from_pretrained(model_id, **model_params)
# training hyperparameters
def hp_space(trial):
return {
"learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
"num_epochs": trial.suggest_int("num_epochs", 1, 5),
"batch_size": trial.suggest_categorical("batch_size", [4, 8, 16, 32]),
"num_iterations": trial.suggest_categorical("num_iterations", [5, 10, 20, 40, 80]),
"seed": trial.suggest_int("seed", 1, 40),
"max_iter": trial.suggest_int("max_iter", 50, 300),
"solver": trial.suggest_categorical("solver", ["newton-cg", "lbfgs", "liblinear"]),
"model_id": trial.suggest_categorical(
"model_id",
[
"sentence-transformers/all-mpnet-base-v2",
"sentence-transformers/all-MiniLM-L12-v1",
],
),
}
trainer = SetFitTrainer(
train_dataset=train_dataset,
eval_dataset=test_dataset,
model_init=model_init,
)
best_run = trainer.hyperparameter_search(direction="maximize", hp_space=hp_space, n_trials=100)
```
After running `100` trials (runs), the best model was found with the following hyperparameters:
```
{'learning_rate': 2.2041595048800003e-05, 'num_epochs': 2, 'batch_size': 64, 'num_iterations': 20, 'seed': 34, 'max_iter': 182, 'solver': 'lbfgs', 'model_id': 'sentence-transformers/all-mpnet-base-v2'}
```
It achieves an accuracy of `0.873421052631579`, which is 1.1% better than the model we trained without the hyperparameter search.
```python
best_run.hyperparameters
```
After we have found the best hyperparameters, we run a final training using them.
```python
trainer.apply_hyperparameters(best_run.hyperparameters, final_model=True)
trainer.train()
metrics = trainer.evaluate()
print(f"model used: {best_run.hyperparameters['model_id']}")
print(f"train dataset: {len(train_dataset)} samples")
print(f"accuracy: {metrics['accuracy']}")
# model used: sentence-transformers/all-mpnet-base-v2
# train dataset: 32 samples
# accuracy: 0.873421052631579
```
## Conclusion
That's it, we have created a high-performing text-classification model with only `32` labeled samples, or 8 samples per class, using the SetFit approach. Our SetFit classifier achieved an accuracy of `0.873421052631579` on the test set. For comparison, a regular model fine-tuned on the whole dataset (`12 000`) achieves an accuracy of [~94%](https://huggingface.co/fabriceyhc/bert-base-uncased-ag_news).
![result](/static/blog/getting-started-setfit/result.png)
This means that with 375x less data, you lose only ~7% accuracy. 🤯
This is huge! SetFit will help so many companies get started with text-classification and transformers, without the need for a lot of labeled data and compute power. Compared to LLMs, a SetFit classifier takes less than 1 hour on a small GPU (NVIDIA T4) to train, or costs less than $1, so to speak.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Serverless BERT with HuggingFace and AWS Lambda | https://www.philschmid.de/serverless-bert-with-huggingface-and-aws-lambda | 2020-06-30 | [
"AWS",
"Serverless",
"Bert"
] | Build a serverless question-answering API with BERT, HuggingFace, the Serverless Framework and AWS Lambda. | "Serverless" and "BERT" are two topics that strongly influenced the world of computing.
[Serverless architecture](https://hackernoon.com/what-is-serverless-architecture-what-are-its-pros-and-cons-cc4b804022e9)
allows us to provide dynamically scale-in and -out the software without managing and provisioning computing power.
[It allows us, developers, to focus on our applications](https://www.cloudflare.com/learning/serverless/why-use-serverless/).
BERT is probably the most known NLP model out there. You can say it changed the way we work with textual data and what
we can learn from it.
_["BERT will help [Google] Search [achieve a] better understand[ing] one in 10 searches"](https://www.blog.google/products/search/search-language-understanding-bert/)_.
BERT and its fellow friends RoBERTa, GPT-2, ALBERT, and T5 will drive business and business ideas in the next few years
and will change/disrupt business areas like the internet once did.
![google-search-bert](/static/blog/serverless-bert-with-huggingface-and-aws-lambda/google-search-bert.png)
[search language understanding BERT](https://www.blog.google/products/search/search-language-understanding-bert/)
Imagine the business value you achieve combining these two together. But BERT is not the easiest machine learning model
to be deployed in a serverless architecture. BERT is quite big and needs quite some computing power. Most tutorials you
find online demonstrate how to deploy BERT in "easy" environments like a VM with 16GB of memory and 4 CPUs.
I will show you how to leverage the benefits of serverless architectures and deploy a BERT Question-Answering API in a
serverless environment. We are going to use the [Transformers](https://github.com/huggingface/transformers) library by
HuggingFace, the [Serverless Framework](https://serverless.com/), and AWS Lambda.
---
## Transformer Library by Huggingface
![transformers-logo](/static/blog/serverless-bert-with-huggingface-and-aws-lambda/transformers-logo.png)
The [Transformers library](https://github.com/huggingface/transformers) provides state-of-the-art machine learning
architectures like BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5 for Natural Language Understanding (NLU) and Natural
Language Generation (NLG). It also provides thousands of pre-trained models in 100+ different languages and is deeply
interoperable between PyTorch & TensorFlow 2.0. It enables developers to fine-tune machine learning models for
different NLP-tasks like text classification, sentiment analysis, question-answering, or text generation.
---
## AWS Lambda
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a serverless computing service that lets you
run code without managing servers. It executes your code only when required and scales automatically, from a few
requests per day to thousands per second. You only pay for the compute time you consume – there is no charge when your
code is not running.
![aws-lambda-logo](/static/blog/serverless-bert-with-huggingface-and-aws-lambda/lambda-logo.png)
---
## Serverless Framework
The Serverless Framework helps us develop and deploy AWS Lambda functions. It’s a CLI that offers structure, automation,
and best practices right out of the box. It also allows us to focus on building sophisticated, event-driven, serverless
architectures, comprised of functions and events.
![serverless-framework-logo](/static/blog/serverless-bert-with-huggingface-and-aws-lambda/serverless-logo.png)
If you aren’t familiar or haven’t set up the Serverless Framework, take a look at
this [quick-start with the Serverless Framework](https://serverless.com/framework/docs/providers/aws/guide/quick-start/).
---
## Tutorial
Before we get started, make sure you have the [Serverless Framework](https://serverless.com/) configured and set up. You
also need a working Docker environment. A Docker environment is used to build our own python runtime, which we deploy to
AWS Lambda. Furthermore, you need access to an AWS Account to create an S3 Bucket and the AWS Lambda function.
In the tutorial, we are going to build a Question-Answering API with a pre-trained `BERT` model. The idea is we send a
context (small paragraph) and a question to the lambda function, which will respond with the answer to the question.
As this guide is not about building a model, we will use a pre-built version that I created using `distilbert`. You can
check the colab notebook [here](https://colab.research.google.com/drive/1eyVi8tkCr7N-sE-yyhDB_lduowp1EZ78?usp=sharing).
```python
context = """We introduce a new language representation model called BERT, which stands for
Bidirectional Encoder Representations from Transformers. Unlike recent language
representation models (Peters et al., 2018a; Radford et al., 2018), BERT is
designed to pretrain deep bidirectional representations from unlabeled text by
jointly conditioning on both left and right context in all layers. As a result,
the pre-trained BERT model can be finetuned with just one additional output
layer to create state-of-the-art models for a wide range of tasks, such as
question answering and language inference, without substantial taskspecific
architecture modifications. BERT is conceptually simple and empirically
powerful. It obtains new state-of-the-art results on eleven natural language
processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute
improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1
question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD
v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."""
question_one = "What is BERTs best score on Squadv2 ?"
# 83 . 1
question_two = "What does the 'B' in BERT stand for?"
# 'bidirectional encoder representations from transformers'
```
Before we start, I want to say that we're not gonna go into detail this time. If you want to understand more about how
to use Deep Learning in AWS Lambda I suggest you check out my other articles:
- [Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero)
- [How to Set Up a CI/CD Pipeline for AWS Lambda With GitHub Actions and Serverless](https://www.philschmid.de/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless)
The architecture we are building will look like this.
![architektur](/static/blog/serverless-bert-with-huggingface-and-aws-lambda/architektur.png)
**What are we going to do:**
- create a Python Lambda function with the Serverless Framework
- create an S3 Bucket and upload our model
- Configure the `serverless.yaml`, add `transformers` as a dependency and set up an API Gateway for inference
- add the `BERT` model from the
[colab notebook](https://colab.research.google.com/drive/1eyVi8tkCr7N-sE-yyhDB_lduowp1EZ78?usp=sharing#scrollTo=pUdW5bwb1qre)
to our function
- deploy & test the function
You can find everything we are doing in this
[GitHub repository](https://github.com/philschmid/serverless-bert-with-huggingface-aws-lambda/blob/master/model/model.py)
and the
[colab notebook](https://colab.research.google.com/drive/1Ehy2Tfadj4XASpMDMuNTqHZsXHtDvTmf#scrollTo=vV9cHcwN0MXw).
---
## Create a Python Lambda function
First, we create our AWS Lambda function by using the Serverless CLI with the `aws-python3` template.
```bash
serverless create --template aws-python3 --path serverless-bert
```
This CLI command will create a new directory containing a `handler.py`, `.gitignore` and `serverless.yaml` file. The
`handler.py` contains some basic boilerplate code.
```python
import json
def hello(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"input": event
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
---
## Add `transformers` as a dependency
The Serverless Framework created almost everything we need, except for the `requirements.txt`. We create the
`requirements.txt` by hand and add the following dependencies.
```python
https://download.pytorch.org/whl/cpu/torch-1.5.0%2Bcpu-cp38-cp38-linux_x86_64.whl
transformers==2.10
```
---
## Create an S3 Bucket and upload the model
AWS S3 and PyTorch provide a unique way of working with machine learning models that are bigger than 250MB. Why 250MB?
The size of the Lambda function is limited to 250MB unzipped.
But S3 allows files to be loaded directly from S3 into memory. In our function, we are going to load our model
`squad-distilbert` from S3 into memory and read it from memory as a buffer in PyTorch.
If you run the
[colab notebook](https://colab.research.google.com/drive/1eyVi8tkCr7N-sE-yyhDB_lduowp1EZ78?usp=sharing#scrollTo=pUdW5bwb1qre)
it will create a file called `squad-distilbert.tar.gz`, which includes our model.
You can create an S3 bucket either using the management console or with the following command.
```bash
aws s3api create-bucket --bucket bucket-name --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
```
After we created the bucket we can upload our model. You can do it either manually or using the provided python script.
```python
import boto3
def upload_model(model_path='', s3_bucket='', key_prefix='', aws_profile='default'):
s3 = boto3.session.Session(profile_name=aws_profile)
client = s3.client('s3')
client.upload_file(model_path, s3_bucket, key_prefix)
```
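A hedged example of how this helper could be called; the bucket name and key prefix below are placeholders, use your own values:
```python
# example invocation (bucket and key prefix are placeholders)
upload_model(
    model_path="squad-distilbert.tar.gz",
    s3_bucket="my-serverless-bert-bucket",
    key_prefix="squad-distilbert/squad-distilbert.tar.gz",
    aws_profile="default",
)
```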
---
## Configuring the `serverless.yaml`
This time I provided the complete `serverless.yaml` for us. If you want to know what each section is used for, I suggest
you check out
[Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero). In
this article, I went through each configuration and explained its usage.
```yaml
service: serverless-bert
provider:
name: aws
runtime: python3.8
region: eu-central-1
timeout: 60
iamRoleStatements:
- Effect: 'Allow'
Action:
- s3:getObject
Resource: arn:aws:s3:::<your-S3-Bucket>/<key_prefix>/*
custom:
pythonRequirements:
dockerizePip: true
zip: true
slim: true
strip: false
noDeploy:
- docutils
- jmespath
- pip
- python-dateutil
- setuptools
- six
- tensorboard
useStaticCache: true
useDownloadCache: true
cacheLocation: './cache'
package:
individually: false
exclude:
- package.json
- package-log.json
- node_modules/**
- cache/**
- test/**
- __pycache__/**
- .pytest_cache/**
- model/pytorch_model.bin
- raw/**
- .vscode/**
- .ipynb_checkpoints/**
functions:
predict_answer:
handler: handler.predict_answer
memorySize: 3008
timeout: 60
events:
- http:
path: ask
method: post
cors: true
plugins:
- serverless-python-requirements
```
---
## Add the `BERT` model from the [colab notebook](https://colab.research.google.com/drive/1eyVi8tkCr7N-sE-yyhDB_lduowp1EZ78?usp=sharing#scrollTo=pUdW5bwb1qre) to our function
A typical `transformers` model consists of a `pytorch_model.bin`, `config.json`, `special_tokens_map.json`,
`tokenizer_config.json`, and `vocab.txt`. The `pytorch_model.bin` has already been extracted and uploaded to S3.
We are going to add `config.json`, `special_tokens_map.json`, `tokenizer_config.json`, and `vocab.txt` directly into our
Lambda function because they are only a few KB in size. Therefore we create a `model` directory in our lambda function.
_If this sounds complicated, check out the
[GitHub repository](https://github.com/philschmid/serverless-bert-with-huggingface-aws-lambda)._
The next step is to create a `model.py` in the `model/` directory that holds our model class `ServerlessModel`.
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, AutoConfig
import torch
import boto3
import os
import tarfile
import io
import base64
import json
import re
s3 = boto3.client('s3')
class ServerlessModel:
def __init__(self, model_path=None, s3_bucket=None, file_prefix=None):
self.model, self.tokenizer = self.from_pretrained(
model_path, s3_bucket, file_prefix)
def from_pretrained(self, model_path: str, s3_bucket: str, file_prefix: str):
model = self.load_model_from_s3(model_path, s3_bucket, file_prefix)
tokenizer = self.load_tokenizer(model_path)
return model, tokenizer
def load_model_from_s3(self, model_path: str, s3_bucket: str, file_prefix: str):
if model_path and s3_bucket and file_prefix:
obj = s3.get_object(Bucket=s3_bucket, Key=file_prefix)
bytestream = io.BytesIO(obj['Body'].read())
tar = tarfile.open(fileobj=bytestream, mode="r:gz")
config = AutoConfig.from_pretrained(f'{model_path}/config.json')
for member in tar.getmembers():
if member.name.endswith(".bin"):
f = tar.extractfile(member)
state = torch.load(io.BytesIO(f.read()))
model = AutoModelForQuestionAnswering.from_pretrained(
pretrained_model_name_or_path=None, state_dict=state, config=config)
return model
else:
raise KeyError('No S3 Bucket and Key Prefix provided')
def load_tokenizer(self, model_path: str):
tokenizer = AutoTokenizer.from_pretrained(model_path)
return tokenizer
def encode(self, question, context):
encoded = self.tokenizer.encode_plus(question, context)
return encoded["input_ids"], encoded["attention_mask"]
def decode(self, token):
answer_tokens = self.tokenizer.convert_ids_to_tokens(
token, skip_special_tokens=True)
return self.tokenizer.convert_tokens_to_string(answer_tokens)
def predict(self, question, context):
input_ids, attention_mask = self.encode(question, context)
start_scores, end_scores = self.model(torch.tensor(
[input_ids]), attention_mask=torch.tensor([attention_mask]))
ans_tokens = input_ids[torch.argmax(
start_scores): torch.argmax(end_scores)+1]
answer = self.decode(ans_tokens)
return answer
```
In the `handler.py` we create an instance of our `ServerlessModel` and can use the `predict` function to get our answer.
```python
try:
import unzip_requirements
except ImportError:
pass
from model.model import ServerlessModel
import json
model = ServerlessModel('./model', <s3_bucket>, <file_prefix>)
def predict_answer(event, context):
try:
body = json.loads(event['body'])
answer = model.predict(body['question'], body['context'])
return {
"statusCode": 200,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({'answer': answer})
}
except Exception as e:
return {
"statusCode": 500,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({"error": repr(e)})
}
```
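To sanity-check the handler locally (outside of Lambda), you could simulate a minimal API Gateway event. This is only a sketch and only works once the S3 bucket and key prefix placeholders above point to your uploaded model:
```python
# local smoke test with a minimal API Gateway-style event
import json

event = {
    "body": json.dumps({
        "question": "What does the 'B' in BERT stand for?",
        "context": "BERT stands for Bidirectional Encoder Representations from Transformers.",
    })
}
print(predict_answer(event, None))  # expects a JSON response with an 'answer' field
```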
---
## Deploy & Test the function
In order to deploy the function you only have to run `serverless deploy`.
After this process is done we should see something like this.
![serverless-deployment](/static/blog/serverless-bert-with-huggingface-and-aws-lambda/serverless-deployment.png)
---
## Test and Outcome
To test our Lambda function we can use Insomnia, Postman, or any other REST client. Just add a JSON with a `context` and
a `question` to the body of your request. Let's try it with our example from the colab notebook.
```json
{
"context": "We introduce a new language representation model called BERT, which stands for idirectional Encoder Representations from Transformers. Unlike recent language epresentation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial taskspecific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"question": "What is BERTs best score on Squadv2 ?"
}
```
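The same request can be sent from Python. In the sketch below, the endpoint URL is a placeholder that you replace with the `ask` endpoint printed by `serverless deploy`:
```python
import requests

# endpoint URL is a placeholder - use the URL from the `serverless deploy` output
url = "https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/ask"
payload = {
    "context": "BERT obtains new state-of-the-art results on eleven natural language processing tasks, "
               "including pushing the SQuAD v2.0 Test F1 to 83.1.",
    "question": "What is BERTs best score on Squadv2 ?",
}
print(requests.post(url, json=payload).json())  # e.g. {'answer': '83 . 1'}
```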
![insomnia-request](/static/blog/serverless-bert-with-huggingface-and-aws-lambda/insomnia-request.png)
Our `ServerlessModel` answered our question correctly with `83.1`. Also, you can see the complete request took 319ms
with a lambda execution time of around 530ms. To be honest, this is pretty fast.
The best thing is, our BERT model automatically scales up if there are several incoming requests! It scales up to
thousands of parallel requests without any worries.
_If you rebuild this, be aware that the first request could take a while. First, the Lambda function unzips and
installs our dependencies and then downloads the model from S3._
---
Thanks for reading. You can find the
[GitHub repository](https://github.com/philschmid/serverless-bert-with-huggingface-aws-lambda) with the complete code
[here](https://github.com/philschmid/serverless-bert-with-huggingface-aws-lambda) and the colab notebook
[here](https://colab.research.google.com/drive/1Ehy2Tfadj4XASpMDMuNTqHZsXHtDvTmf#scrollTo=vV9cHcwN0MXw).
Thanks for reading. If you have any questions, feel free to contact me or comment this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Accelerate Stable Diffusion inference with DeepSpeed-Inference on GPUs | https://www.philschmid.de/stable-diffusion-deepspeed-inference | 2022-11-08 | [
"Diffusion",
"DeepSpeed",
"HuggingFace",
"Optimization"
] | Learn how to optimize Stable Diffusion for GPU inference with a 1-line of code using Hugging Face Diffusers and DeepSpeed. | In this session, you will learn how to optimize Stable Diffusion for inference using the Hugging Face [🧨 Diffusers library](https://huggingface.co/docs/diffusers/index) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). The session will show you how to apply state-of-the-art optimization techniques using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/).
This session will focus on single GPU (Ampere Generation) inference for Stable-Diffusion models.
By the end of this session, you will know how to optimize your Hugging Face Stable-Diffusion models using DeepSpeed-Inference. We are going to optimize [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) for text-to-image generation.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load vanilla Stable Diffusion model and set baseline](#2-load-vanilla-stable-diffusion-model-and-set-baseline)
3. [Optimize Stable Diffusion for GPU using DeepSpeeds `InferenceEngine`](#3-optimize-stable-diffusion-for-gpu-using-deepspeeds-inferenceengine)
4. [Evaluate the performance and speed](#4-evaluate-the-performance-and-speed)
Let's get started! 🚀
_This tutorial was created and run on a g5.xlarge AWS EC2 Instance including an NVIDIA A10G. The tutorial doesn't work on older GPUs, e.g. due to incompatibility of `triton` kernels._
---
## Quick Intro: What is DeepSpeed-Inference
[DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) is an extension of the [DeepSpeed](https://www.deepspeed.ai/) framework focused on inference workloads. [DeepSpeed Inference](https://www.deepspeed.ai/#deepspeed-inference) combines model parallelism technologies, such as tensor and pipeline parallelism, with custom optimized CUDA kernels.
DeepSpeed provides a seamless inference mode for compatible transformer based models trained using DeepSpeed, Megatron, and HuggingFace. For a list of compatible models please see [here](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/module_inject/replace_policy.py).
As mentioned, DeepSpeed-Inference integrates model-parallelism techniques, allowing you to run multi-GPU inference for LLMs like [BLOOM](https://huggingface.co/bigscience/bloom) with 176 billion parameters.
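To make the API shape concrete, here is a generic, illustrative sketch of what a DeepSpeed-Inference call looks like. This is not the Stable Diffusion optimization of step 3, and the arguments shown are just the commonly used ones for `deepspeed==0.7.4`:
```python
# illustrative sketch of the DeepSpeed-Inference API with a small placeholder model
import torch
import deepspeed
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # placeholder model for illustration
ds_model = deepspeed.init_inference(
    model,
    mp_size=1,                        # model/tensor-parallel degree (number of GPUs)
    dtype=torch.float16,              # target precision
    replace_with_kernel_inject=True,  # inject the optimized CUDA kernels
)
```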
If you want to learn more about DeepSpeed inference:
- [Paper: DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale](https://arxiv.org/pdf/2207.00032.pdf)
- [Blog: Accelerating large-scale model inference and training via system optimizations and compression](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/)
## 1. Setup Development Environment
Our first step is to install DeepSpeed, along with PyTorch, Transformers, Diffusers, and some other libraries. Running the following cell will install all the required packages.
_Note: You need a machine with a GPU and a compatible CUDA installed. You can check this by running `nvidia-smi` in your terminal. If your setup is correct, you should get statistics about your GPU._
```python
!pip install torch==1.12.1 --extra-index-url https://download.pytorch.org/whl/cu116 --upgrade
!pip install deepspeed==0.7.4 --upgrade
!pip install diffusers==0.6.0 triton==2.0.0.dev20221005 --upgrade
!pip install transformers[sentencepiece]==4.24.0 accelerate --upgrade
```
Before we start. Let's make sure all packages are installed correctly.
```python
import re
import torch
# check deepspeed installation
report = !python3 -m deepspeed.env_report
r = re.compile('.*ninja.*OKAY.*')
assert any(r.match(line) for line in report) == True, "DeepSpeed Inference not correct installed"
# check cuda and torch version
torch_version, cuda_version = torch.__version__.split("+")
torch_version = ".".join(torch_version.split(".")[:2])
cuda_version = f"{cuda_version[2:4]}.{cuda_version[4:]}"
r = re.compile(f'.*torch.*{torch_version}.*')
assert any(r.match(line) for line in report) == True, "Wrong Torch version"
r = re.compile(f'.*cuda.*{cuda_version}.*')
assert any(r.match(line) for line in report) == True, "Wrong Cuda version"
```
## 2. Load vanilla Stable Diffusion model and set baseline
After we set up our environment, we create a baseline for our model. We use the [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) checkpoint.
Before we can load our model from the Hugging Face Hub we have to make sure that we accepted the license of [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to be able to use it. [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) is published under the [CreativeML OpenRAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license). You can accept the license by clicking on the `Agree and access repository` button on the model page at: https://huggingface.co/CompVis/stable-diffusion-v1-4.
![license](/static/blog/stable-diffusion-deepspeed-inference/license.png)
_Note: This will give access to the repository for the logged in user. This user can then be used to generate [HF Tokens](https://huggingface.co/settings/tokens) to load the model programmatically._
Before we can load the model, make sure you have a valid [HF Token](https://huggingface.co/settings/token). You can create a token by going to your [Hugging Face Settings](https://huggingface.co/settings/token) and clicking on the `New token` button. Make sure the environment has enough disk space to store the model, ~30GB should be enough.
```python
from diffusers import DiffusionPipeline
import torch
HF_MODEL_ID="CompVis/stable-diffusion-v1-4"
HF_TOKEN="" # your hf token: https://huggingface.co/settings/tokens
assert len(HF_TOKEN) > 0, "Please set HF_TOKEN to your huggingface token. You can find it here: https://huggingface.co/settings/tokens"
# load vanilla pipeline
pipeline = DiffusionPipeline.from_pretrained(HF_MODEL_ID, torch_dtype=torch.float16, revision="fp16",use_auth_token=HF_TOKEN)
# move pipeline to GPU
device = "cuda"
pipeline = pipeline.to(device)
```
We can now test our pipeline and generate an image.
```python
pipeline("a photo of an astronaut riding a horse on mars").images[0]
```
![vanilla-example](/static/blog/stable-diffusion-deepspeed-inference/vanilla-sample.png)
The next step is to create a latency baseline. For this, we use the `measure_latency` function, which implements a simple Python loop to run inference and calculate the mean, standard deviation & p95 latency for our model.
```python
from time import perf_counter
import numpy as np
def measure_latency(pipe, prompt):
latencies = []
# warm up
pipe.set_progress_bar_config(disable=True)
for _ in range(2):
_ = pipe(prompt)
# Timed run
for _ in range(10):
start_time = perf_counter()
_ = pipe(prompt)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_s = np.mean(latencies)
time_std_s = np.std(latencies)
time_p95_s = np.percentile(latencies,95)
return f"P95 latency (seconds) - {time_p95_s:.2f}; Average latency (seconds) - {time_avg_s:.2f} +\- {time_std_s:.2f};", time_p95_s
```
We are going to use the same prompt as we used in our example.
```python
prompt = "a photo of an astronaut riding a horse on mars"
vanilla_results = measure_latency(pipeline,prompt)
print(f"Vanilla pipeline: {vanilla_results[0]}")
# Vanilla pipeline: P95 latency (seconds) - 4.57; Average latency (seconds) - 4.56 +\- 0.00;
```
Our pipeline achieves a latency of `4.57s` on a single GPU. This is a good baseline for our optimization.
## 3. Optimize Stable Diffusion for GPU using DeepSpeeds `InferenceEngine`
The next and most important step is to optimize our pipeline for GPU inference. This will be done using the DeepSpeed `InferenceEngine`. The `InferenceEngine` is initialized using the `init_inference` method. We are going to replace the models in our pipeline, including the `UNet` and `CLIP` models, with DeepSpeed-optimized models.
The `init_inference` method expects at least the following parameters:
- `model`: The model to optimize, in our case the whole pipeline.
- `mp_size`: The number of GPUs to use.
- `dtype`: The data type to use.
- `replace_with_kernel_inject`: Whether to inject custom kernels.
You can find more information about the `init_inference` method in the [DeepSpeed documentation](https://deepspeed.readthedocs.io/en/latest/inference-init.html) or [their inference blog](https://www.deepspeed.ai/tutorials/inference-tutorial/).
_Note: You might need to restart your kernel if you are running into a CUDA OOM error._
```python
import torch
import deepspeed
from diffusers import DiffusionPipeline
# Model Repository on huggingface.co
HF_MODEL_ID="CompVis/stable-diffusion-v1-4"
HF_TOKEN="" # your hf token: https://huggingface.co/settings/tokens
assert len(HF_TOKEN) > 0, "Please set HF_TOKEN to your huggingface token. You can find it here: https://huggingface.co/settings/tokens"
# load vanilla pipeline
ds_pipeline = DiffusionPipeline.from_pretrained(HF_MODEL_ID, torch_dtype=torch.float16, revision="fp16",use_auth_token=HF_TOKEN)
# init deepspeed inference engine
deepspeed.init_inference(
model=getattr(ds_pipeline,"model", ds_pipeline), # Transformers models
mp_size=1, # Number of GPU
dtype=torch.float16, # dtype of the weights (fp16)
replace_method="auto", # Lets DS autmatically identify the layer to replace
replace_with_kernel_inject=False, # replace the model with the kernel injector
)
print("DeepSpeed Inference Engine initialized")
```
We can now inspect the model graph to see that the vanilla `UNet2DConditionModel` has been replaced with a `DSUNet`, which includes the `DeepSpeedAttention` and `triton_flash_attn_kernel` modules, custom `nn.Module`s that are optimized for inference.
```python
DSUNet(
(unet): UNet2DConditionModel(
(conv_in): Conv2d(4, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(time_proj): Timesteps()
(time_embedding): TimestepEmbedding(
(linear_1): Linear(in_features=320, out_features=1280, bias=True)
(act): SiLU()
(linear_2): Linear(in_features=1280, out_features=1280, bias=True)
)
(down_blocks): ModuleList(
(0): CrossAttnDownBlock2D(
(attentions): ModuleList(
(0): SpatialTransformer(
(norm): GroupNorm(32, 320, eps=1e-06, affine=True)
(proj_in): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
(transformer_blocks): ModuleList(
(0): BasicTransformerBlock(
(attn1): DeepSpeedAttention(
(triton_flash_attn_kernel): triton_flash_attn()
)
```
```python
from deepspeed.ops.transformer.inference.triton_ops import triton_flash_attn
from deepspeed.ops.transformer.inference import DeepSpeedAttention
assert isinstance(ds_pipeline.unet.unet.down_blocks[0].attentions[0].transformer_blocks[0].attn1, DeepSpeedAttention) == True, "Model not successfully initialized"
assert isinstance(ds_pipeline.unet.unet.down_blocks[0].attentions[0].transformer_blocks[0].attn1.triton_flash_attn_kernel, triton_flash_attn) == True, "Model not successfully initialized"
```
## 4. Evaluate the performance and speed
As the last step, we want to take a detailed look at the performance of our optimized pipeline. Applying optimization techniques, like graph optimizations or mixed-precision, not only impacts performance (latency) but might also have an impact on the accuracy of the model. So accelerating your model comes with a trade-off.
Let's test the performance (latency) of our optimized pipeline. We will use the same prompt as for our vanilla model.
```python
prompt = "a photo of an astronaut riding a horse on mars"
ds_results = measure_latency(ds_pipeline,prompt)
print(f"DeepSpeed model: {ds_results[0]}")
# DeepSpeed model: P95 latency (seconds) - 2.68; Average latency (seconds) - 2.67 +\- 0.00;
```
Our optimized DeepSpeed pipeline achieves a latency of `2.68s`. This is a 1.7x improvement over our baseline. Let's take a look at an image generated by our optimized pipeline.
```python
ds_pipeline("a photo of an astronaut riding a horse on mars").images[0]
```
![ds-example](/static/blog/stable-diffusion-deepspeed-inference/ds-sample.png)
We managed to accelerate the `CompVis/stable-diffusion-v1-4` pipeline latency from `4.57s` to `2.68s` for generating a `512x512` large image. This results in a 1.7x improvement.
![improvements](/static/blog/stable-diffusion-deepspeed-inference/stable-diffusion-improvements.png)
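If you want to reproduce the improvement factor yourself, it is simply the ratio of the two measured p95 latencies returned by the `measure_latency` calls above (a minimal sketch):

```python
# speedup factor based on the two benchmark runs above
vanilla_p95 = vanilla_results[1]  # ~4.57s for the vanilla pipeline
ds_p95 = ds_results[1]            # ~2.68s for the DeepSpeed pipeline
print(f"Improvement: {round(vanilla_p95 / ds_p95, 2)}x")  # ~1.7x
```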
## Conclusion
We successfully optimized our Stable Diffusion with DeepSpeed-inference and managed to decrease our model latency from `4.57s` to `2.68s` or 1.7x.
Those are good results considering that applying the optimization was as easy as adding one additional call to `deepspeed.init_inference`.
But I have to say that this isn't a plug-and-play process you can transfer to any Transformers model, task, or dataset. Also, make sure to check if your model is compatible with DeepSpeed-Inference.
If you want to learn more about Stable Diffusion you should check out:
- [Stable Diffusion with 🧨 Diffusers](https://huggingface.co/blog/stable_diffusion)
- [Stable Diffusion on Amazon SageMaker](https://www.philschmid.de/sagemaker-stable-diffusion)
- [Stable Diffusion Image Generation under 1 second w. DeepSpeed MII](https://github.com/microsoft/DeepSpeed-MII/tree/main/examples/benchmark/txt2img)
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Advanced PII detection and anonymization with Hugging Face Transformers and Amazon SageMaker | https://www.philschmid.de/pii-huggingface-sagemaker | 2022-05-31 | [
"BERT",
"PII",
"HuggingFace",
"SageMaker"
] | Learn how to do advanced PII detection and anonymization with Hugging Face Transformers and Amazon SageMaker. | repository [philschmid/advanced-pii-huggingface-sagemaker](https://github.com/philschmid/advanced-pii-huggingface-sagemaker)
PII or Personally identifiable information (PII) is any data that could potentially identify a specific individual, e.g. to distinguish one person from another. Below are a few examples of PII:
- Name
- Address
- Date of birth
- Telephone number
- Credit Card number
Protecting PII is essential for personal privacy, data privacy, data protection, information privacy, and information security. With just a few bits of an individual's personal information, thieves can create false accounts in the person's name, incur debt, create a falsified passport or sell a person's identity to a criminal.
Transformer models are changing the world of machine learning, starting with natural language processing (NLP), and now, with audio and computer vision. Hugging Face’s mission is to democratize good machine learning and give anyone the opportunity to use these new state-of-the-art machine learning models.
Models Like BERT, RoBERTa, T5, and GPT-2 captured the NLP space and are achieving state-of-the-art results across almost any NLP tasks including, text-classification, question-answering, and token-classification.
---
In this blog, you will learn how to use state-of-the-art Transformers models to recognize, detect and anonymize PII using Hugging Face Transformers, Presidio & Amazon SageMaker.
### What is Presidio?
_Presidio (Origin from Latin praesidium ‘protection, garrison’) helps to ensure sensitive data is properly managed and governed. It provides fast identification and anonymization modules for private entities in text and images such as credit card numbers, names, locations, social security numbers, bitcoin wallets, US phone numbers, financial data and more._ - [Documentation](https://microsoft.github.io/presidio/).
![presidio-gif](/static/blog/pii-huggingface-sagemaker/presidio.gif)
_- From Presidio [Documentation](https://microsoft.github.io/presidio/)_
By default, Presidio uses `spacy` for PII identification and extraction. In this example, we are going to replace `spacy` with a Hugging Face Transformer to perform PII detection and anonymization.
Presidio already supports [24 PII entities](https://microsoft.github.io/presidio/supported_entities/) out of the box, including CREDIT_CARD, IBAN_CODE, EMAIL_ADDRESS, US_BANK_NUMBER, US_ITIN...
We are going to extend these 24 available entities with transformers to include LOCATION, PERSON & ORGANIZATION. But it is possible to use any "entity" extracted by the transformers model.
You will learn how to:
1. Setup Environment and Permissions
2. Create a new `transformers` based EntityRecognizer
3. Create a custom `inference.py` including the `EntityRecognizer`
4. Deploy the PII service to Amazon SageMaker
5. Request and customization of requests
Let's get started! 🚀
---
## 1. Setup Environment and Permissions
_*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow, if you don't have it installed already._
```python
%pip install sagemaker --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.75.0"
```
Install `git` and `git-lfs`
```python
# For notebook instances (Amazon Linux)
!sudo yum update -y
!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash
!sudo yum install git-lfs git -y
# For other environments (Ubuntu)
!sudo apt-get update -y
!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
!sudo apt-get install git-lfs git -y
```
### Permissions
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for Sagemaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 2. Create a new `transformers` based EntityRecognizer
Presidio can be extended to support the detection of new types of PII entities and to support additional languages. These PII recognizers could be added **via code** or **ad-hoc as part of the request**.
- The `EntityRecognizer` is an abstract class for all recognizers.
- The `RemoteRecognizer` is an abstract class for calling external PII detectors. See more info [here](https://microsoft.github.io/presidio/analyzer/adding_recognizers/#creating-a-remote-recognizer).
- The abstract class `LocalRecognizer` is implemented by all recognizers running within the Presidio-analyzer process.
- The `PatternRecognizer` is a class for supporting regex and deny-list-based recognition logic, including validation (e.g., with checksum) and context support. See an example [here](https://microsoft.github.io/presidio/analyzer/adding_recognizers/#simple-example).
For simple recognizers based on regular expressions or deny-lists, we can leverage the provided `PatternRecognizer`:
```python
from presidio_analyzer import PatternRecognizer
titles_recognizer = PatternRecognizer(supported_entity="TITLE",
deny_list=["Mr.","Mrs.","Miss"])
```
To create a Hugging Face Transformer recognizer, you have to create a new class deriving from `EntityRecognizer` and implementing a `load` and an `analyze` method.
For this example, the `__init__` method will be used to "load" our model using the `transformers.pipeline` for `token-classification`.
If you want to learn more how you can customize/create recognizer you can check out the [documentation](https://microsoft.github.io/presidio/analyzer/adding_recognizers/#extending-the-analyzer-for-additional-pii-entities).
```python
from typing import List
from presidio_analyzer import EntityRecognizer, RecognizerResult
from presidio_analyzer.nlp_engine import NlpArtifacts
from transformers import pipeline

class TransformersRecognizer(EntityRecognizer):
def __init__(self,model_id_or_path=None,aggregation_strategy="average",supported_language="en",ignore_labels=["O","MISC"]):
# inits transformers pipeline for given mode or path
self.pipeline = pipeline("token-classification",model=model_id_or_path,aggregation_strategy="average",ignore_labels=ignore_labels)
# map labels to presidio labels
self.label2presidio={
"PER": "PERSON",
"LOC": "LOCATION",
"ORG": "ORGANIZATION",
}
# passes entities from model into parent class
super().__init__(supported_entities=list(self.label2presidio.values()),supported_language=supported_language)
def load(self) -> None:
"""No loading is required."""
pass
def analyze(
self, text: str, entities: List[str]=None, nlp_artifacts: NlpArtifacts=None
) -> List[RecognizerResult]:
"""
Extracts entities using Transformers pipeline
"""
results = []
# keep max sequence length in mind
predicted_entities = self.pipeline(text)
if len(predicted_entities) >0:
for e in predicted_entities:
converted_entity = self.label2presidio[e["entity_group"]]
                if entities is None or converted_entity in entities:
results.append(
RecognizerResult(
entity_type=converted_entity,
start=e["start"],
end=e["end"],
score=e["score"]
)
)
return results
```
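As a quick local sanity check, you can register the recognizer with Presidio's `AnalyzerEngine` and analyze a sample sentence. This is a minimal sketch; it uses [Jean-Baptiste/roberta-large-ner-english](https://huggingface.co/Jean-Baptiste/roberta-large-ner-english), the model we also deploy later, and assumes the spaCy `en_core_web_lg` model required by Presidio is already downloaded.

```python
from presidio_analyzer import AnalyzerEngine

# register the transformers-based recognizer with a local analyzer
transformers_recognizer = TransformersRecognizer(model_id_or_path="Jean-Baptiste/roberta-large-ner-english")
analyzer = AnalyzerEngine()
analyzer.registry.add_recognizer(transformers_recognizer)

# analyze a sample sentence for the entities provided by our recognizer
results = analyzer.analyze(
    text="My name is David Johnson and I live in Maine.",
    entities=["PERSON", "LOCATION"],
    language="en",
)
print(results)
```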
## 3. Create a custom `inference.py` including the `EntityRecognizer`
To use the custom inference script, you need to create an `inference.py` script. In this example, we are going to overwrite the `model_fn` to load our `HFTransformersRecognizer` correctly and the `predict_fn` to run the PII analysis.
Additionally, we need to provide a `requirements.txt` in the `code/` directory to install `presidio` and other required dependencies.
```python
!mkdir code
```
create `inference.py`
```python
%%writefile code/inference.py
from presidio_anonymizer import AnonymizerEngine
from presidio_analyzer import AnalyzerEngine
from typing import List
from presidio_analyzer import AnalyzerEngine, EntityRecognizer, RecognizerResult
from presidio_analyzer.nlp_engine import NlpArtifacts
from transformers import pipeline
# load spacy model -> workaround
import os
os.system("spacy download en_core_web_lg")
# list of entities: https://microsoft.github.io/presidio/supported_entities/#list-of-supported-entities
DEFAULT_ANOYNM_ENTITIES = [
"CREDIT_CARD",
"CRYPTO",
"DATE_TIME",
"EMAIL_ADDRESS",
"IBAN_CODE",
"IP_ADDRESS",
"NRP",
"LOCATION",
"PERSON",
"PHONE_NUMBER",
"MEDICAL_LICENSE",
"URL",
"ORGANIZATION"
]
# init anonymize engine
engine = AnonymizerEngine()
class HFTransformersRecognizer(EntityRecognizer):
def __init__(
self,
model_id_or_path=None,
aggregation_strategy="simple",
supported_language="en",
ignore_labels=["O", "MISC"],
):
# inits transformers pipeline for given mode or path
self.pipeline = pipeline(
"token-classification", model=model_id_or_path, aggregation_strategy=aggregation_strategy, ignore_labels=ignore_labels
)
# map labels to presidio labels
self.label2presidio = {
"PER": "PERSON",
"LOC": "LOCATION",
"ORG": "ORGANIZATION",
}
# passes entities from model into parent class
super().__init__(supported_entities=list(self.label2presidio.values()), supported_language=supported_language)
def load(self) -> None:
"""No loading is required."""
pass
def analyze(
self, text: str, entities: List[str] = None, nlp_artifacts: NlpArtifacts = None
) -> List[RecognizerResult]:
"""
Extracts entities using Transformers pipeline
"""
results = []
# keep max sequence length in mind
predicted_entities = self.pipeline(text)
if len(predicted_entities) > 0:
for e in predicted_entities:
converted_entity = self.label2presidio[e["entity_group"]]
                if entities is None or converted_entity in entities:
results.append(
RecognizerResult(
entity_type=converted_entity, start=e["start"], end=e["end"], score=e["score"]
)
)
return results
def model_fn(model_dir):
transformers_recognizer = HFTransformersRecognizer(model_dir)
# Set up the engine, loads the NLP module (spaCy model by default) and other PII recognizers
analyzer = AnalyzerEngine()
analyzer.registry.add_recognizer(transformers_recognizer)
return analyzer
def predict_fn(data, analyzer):
sentences = data.pop("inputs", data)
if "parameters" in data:
anonymization_entities = data["parameters"].get("entities", DEFAULT_ANOYNM_ENTITIES)
anonymize_text = data["parameters"].get("anonymize", False)
else:
anonymization_entities = DEFAULT_ANOYNM_ENTITIES
anonymize_text = False
# identify entities
results = analyzer.analyze(text=sentences, entities=anonymization_entities, language="en")
# anonymize text
if anonymize_text:
result = engine.anonymize(text=sentences, analyzer_results=results)
return {"anonymized": result.text}
return {"found": [entity.to_dict() for entity in results]}
```
create `requirements.txt`
```python
%%writefile code/requirements.txt
presidio-analyzer
spacy
transformers
presidio-anonymizer
```
## 4. Deploy the PII service to Amazon SageMaker
Before you can deploy the PII service to Amazon SageMaker, you need to create a `model.tar.gz` with the inference script and model.
You need to bundle the `inference.py` and all model artifacts, e.g. `pytorch_model.bin`, into a `model.tar.gz`. The `inference.py` script will be placed into a `code/` folder. We will use `git` and `git-lfs` to easily download our model from hf.co/models and upload it to Amazon S3 so we can use it when creating our SageMaker endpoint.
As the base model for the recognizer, the example will use [Jean-Baptiste/roberta-large-ner-english](https://huggingface.co/Jean-Baptiste/roberta-large-ner-english).
```python
repository = "Jean-Baptiste/roberta-large-ner-english"
model_id=repository.split("/")[-1]
s3_location=f"s3://{sess.default_bucket()}/custom_inference/{model_id}/model.tar.gz"
```
1. Download the model from hf.co/models with `git clone`.
```python
!git lfs install
!git clone https://huggingface.co/$repository
```
2. Copy `inference.py` into the `code/` directory of the model directory.
```python
!cp -r code/ $model_id/code/
```
3. Create a `model.tar.gz` archive with all the model artifacts and the `inference.py` script.
```python
%cd $model_id
!tar zcvf model.tar.gz *
```
4. Upload the `model.tar.gz` to Amazon S3:
```python
!aws s3 cp model.tar.gz $s3_location
```
After you have uploaded the `model.tar.gz` archive to Amazon S3, you can create a custom `HuggingFaceModel` class. This class will be used to create and deploy our SageMaker endpoint.
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_location, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# deploy the endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge"
)
```
## 5. Request and customization of requests
The `.deploy()` method returns a `HuggingFacePredictor` object which can be used to request inference.
```python
payload="""
Hello, my name is David Johnson and I live in Maine.
I work as a software engineer at Amazon.
You can call me at (123) 456-7890.
My credit card number is 4095-2609-9393-4932 and my crypto wallet id is 16Yeky6GMjeNkAiNcBY7ZhrLoMSgg1BoyZ.
On September 18 I visited microsoft.com and sent an email to test@presidio.site, from the IP 192.168.0.1.
My passport: 191280342 and my phone number: (212) 555-1234.
This is a valid International Bank Account Number: IL150120690000003111111. Can you please check the status on bank account 954567876544?
Kate's social security number is 078-05-1126. Her driver license? it is 1234567A.
"""
```
**Simple detection request**
```python
data = {
"inputs": payload,
}
res = predictor.predict(data=data)
print(res)
# {'found': [{'entity_type': 'CREDIT_CARD', 'start': 120, 'end': 139, 'score': 1.0, 'analysis_explanation': None,....
```
**Detect only specific PII entities**
```python
data = {
"inputs": payload,
"parameters": {
"entities":["PERSON","LOCATION","ORGANIZATION"]
}
}
res = predictor.predict(data=data)
print(res)
```
**Anonymizing PII entities**
```python
data = {
"inputs": payload,
"parameters": {
"anonymize": True,
}
}
res = predictor.predict(data=data)
print(res["anonymized"])
```
> Hello, my name is \<PERSON\> and I live in \<LOCATION\>.
> I work as a software engineer at \<ORGANIZATION\>.
> You can call me at \<PHONE_NUMBER\>.
> My credit card number is \<CREDIT_CARD\> and my crypto wallet id is \<CRYPTO\>.
>
> On \<DATE_TIME\> I visited \<URL\> and sent an email to \<EMAIL_ADDRESS\>, from the IP \<IP_ADDRESS\>.
> My passport: 191280342 and my phone number: \<PHONE_NUMBER\>.
> This is a valid International Bank Account Number: \<IBAN_CODE>. Can you please check the status on bank account 954567876544?
> \<PERSON\>'s social security number is \<PHONE_NUMBER\>. Her driver license? it is 1234567A.
**Anonymizing only specific PII entities**
```python
data = {
"inputs": payload,
"parameters": {
"anonymize": True,
"entities":["PERSON","LOCATION"]
}
}
res = predictor.predict(data=data)
print(res["anonymized"])
```
### Delete model and endpoint
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## 6. Conclusion
We successfully created our Transformers-based PII detection and anonymization service with Hugging Face Transformers and Amazon SageMaker.
The service can either detect or directly anonymize the payload we send to the endpoint. The service is built on top of open-source libraries including `transformers` and `presidio` to keep full control of how detections and anonymization are done.
This is a huge benefit compared to services like Amazon Comprehend, which are non-customizable, opaque black-box solutions.
This solution can easily be extended and improved, e.g. by improving the transformers model used to identify additional entities like job titles (“software engineer”), or by adding a new pattern recognizer, e.g. for German personal ID numbers.
The code can be found in this repository [philschmid/advanced-pii-huggingface-sagemaker](https://github.com/philschmid/advanced-pii-huggingface-sagemaker)
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Getting started with CNNs by calculating LeNet-Layer manually | https://www.philschmid.de/getting-started-with-cnn-by-calculating-lenet-layer-manually | 2020-02-28 | [
"Computer Vision"
] | Getting started explanation to CNNs by calculating Yann LeCun LeNet-5 manually for handwritten digits and learning about Padding and Stride. | The idea of CNNs is to intelligently adapt to the properties of images by reducing their dimensions. To achieve this,
convolutional layers and pooling layers are used. Convolutional layers reduce the dimensions by applying filters
(kernel windows) to the input. Assuming the input shape is $$n_{h} x n_{w}$$ and the kernel window is $$k_{h} x k_{w}$$, then the output shape will be
$$
(n_{h} - k_{h}+1 )\ x\ (n_{w} - k_{w}+1 )
$$
Pooling layers reduce the dimensions by aggregating the input elements. Assuming the input shape is
$$n_{h} x n_{w}$$ and the pooling method is average with a kernel window of $$k_{h} x k_{w}$$, then the output shape will
be
$$
(n - k +p+s)/s
$$
The explanation for $$p$$ and $$s$$ will follow in the sections on Stride and Padding.
### Example CNNs Architecture LeNet-5
![lenet-5-architecture](/static/blog/getting-started-with-cnn-by-calculating-lenet-layer-manually/lenet-5.svg) This is the architecture of LeNet-5, created by **Yann LeCun** in 1998 and
widely used for handwritten digit recognition (MNIST).
To understand what is happening in each layer, we have to clarify a few basics. Let's start with Stride and Padding.
## Stride and Padding
As described in the introduction, the goal of a CNN is to reduce the dimensions by applying layers. A tricky part of
reducing dimensions is not to erase pieces of information from the original input. For example, if you have an input of
100 x 100 and apply 5 layers with 5 x 5 kernels, you reduce the dimensions to 80 x 80, erasing 20% of the information in 5 layers. This is
where Stride and Padding can be helpful.
$$
(100_{h} - 5_{h}+1 )\ x\ (100_{w} - 5_{w}+1 ) = 96\newline repeat\ it\ 5\ times
$$
### Padding
You can define padding as adding extra pixels as filler around the original input to decrease the loss of
information.
![example-of-padding-1x1](/static/blog/getting-started-with-cnn-by-calculating-lenet-layer-manually/example-of-padding-1x1.svg) Example of adding p (1x1) to an input. If we add padding
to our input the formula for calculating the output changes to
$$
(n_{h} - k_{h}+p_{h}+1 )\ x\ (n_{w} - k_{w}+p_{w}+1 )
$$
if we now add a 1x1 padding to our 100 x 100 input example the reduction of the dimension changes to 85 x 85.
### Stride
When calculating inputs with a kernel window, you start at the top-left corner of the input and then slide it over all
locations from left to right and top to bottom. The default behavior is sliding by one at a time. Sliding by one can
sometimes result in computational inefficiency; for example, if you have a 4k input image, you don't want to
calculate and slide by one. To optimize this, we can slide by more than one to downsample our output. This sliding is
called _stride_.
![example-of-stride-1x1](/static/blog/getting-started-with-cnn-by-calculating-lenet-layer-manually/example-of-stride-1x1.svg)
![example-of-stride-2x2](/static/blog/getting-started-with-cnn-by-calculating-lenet-layer-manually/example-of-stride-2x2.svg) If we add stride to our input the formula for calculating
the output changes to
$$
(n_{h} - k_{h}+p_{h}+s_{h})/s_{h}\ x\ (n_{w} - k_{w}+p_{w}+s_{w})/s_{w}
$$
If we now add a 2x2 stride to our 100 x 100 input example with padding and apply only 1 layer, the output dimension
changes to 49 x 49. _A stride of 0 or None just means having a stride of 1._
## Pooling Layer
Pooling layers are used to reduce the dimensions of input tensors by aggregating information. Pooling layers don't have
parameters like convolutional layers do. Pooling operators are deterministic (all events - especially future ones - are
clearly defined by preconditions). Typically, pooling layers calculate either the maximum (Max Pooling Layer) or
the average (Average Pooling Layer) value of the elements in the pooling window. A pooling window is normally a 2x2
cutout from your input. The size of the pooling window can be tuned, but in most cases, it is 2x2 or 2x2x2.
_Pooling with a pooling window of X x Y is called 2D Pooling and with a pooling window of X x Y x Z it is called 3D Pooling._
_X x Y => X is vertical and Y is horizontal._ ![pooling-window](/static/blog/getting-started-with-cnn-by-calculating-lenet-layer-manually/pooling-window.svg)
### Average 2D Pooling Layer
Average Pooling calculates the mean of the given input: it sums all elements in the pooling window and divides by
the size of the pooling window. ![average-pooling-layer](/static/blog/getting-started-with-cnn-by-calculating-lenet-layer-manually/average-pooling-layer.svg)
### Max 2D Pooling Layer
Max Pooling extracts the maximum value of the given input: it compares the elements in the pooling window and keeps the
element with the highest value. ![max-pooling-layer](/static/blog/getting-started-with-cnn-by-calculating-lenet-layer-manually/max-pooling-layer.svg)
## Fully-Connected / Dense Layer
A Fully-Connected / Dense Layer represents a matrix-vector multiplication, where each input neuron is connected to each
output neuron by a weight. A dense layer is used to change the dimensions of your input. Mathematically speaking, it
applies a rotation, scaling, and translation transform to your vector.
Dense layers are calculated the same as linear layers, $$wx+b$$, but the result is passed through an **activation function**.
$$
(current\ layer\ n * previous\ layer\ n\ (X\ x\ X\ x\ X)) + b
$$
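For example, for the F6 layer of the classic LeNet-5 (84 neurons fully connected to the 120 outputs of the previous layer, values taken from the original LeNet-5 paper), the number of trainable parameters is:

$$
(84 * 120) + 84 = 10164
$$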
## Calculating CNN-Layers in LeNet-5
For calculating the CNN layers, we are using the formula from Yann LeCun's
[LeNet-5 Paper](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf)
$$
(n_{h} +2p_{h}-f_{h})/ s_{h} +1\ x\ (n_{w} +2p_{w} -f_{w})/ s_{w} +1\ x\ Nc
$$
### Variable definiton
$$n=dimension\ of\ input-tensor$$
$$p=padding\ (32x32\ by\ p=1\ \rightarrow\ 34x34)$$
$$f= filter\ size$$
$$Nc = number\ of\ filters $$
The LeNet-5 was trained with images of the size 32x32x1. The first layer consists of 6 5x5 filters applied with a stride
of 1. This results in the following variables:
### Calculating first layer
Variables are defined like:
$$n=32$$
$$p=0$$
$$f=5$$
$$s=1$$
$$Nc=6$$
Add Variables to the formula:
$$
(32+(2*0)-5)/1+1\ x\ (32+(2*0)-5)/1+1\ x\ 6\ ==\ 28\ x\ 28\ x\ 6
$$
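To double-check such calculations, you can translate the formula above into a small Python helper. This is a minimal sketch, assuming square inputs and filters and floor division:

```python
# output shape of a convolutional layer: (n + 2p - f) / s + 1 per spatial dimension
def conv_output_shape(n, f, p=0, s=1, n_c=1):
    out = (n + 2 * p - f) // s + 1
    return out, out, n_c

# first LeNet-5 layer: 32x32 input, 6 filters of 5x5, stride 1, no padding
print(conv_output_shape(n=32, f=5, p=0, s=1, n_c=6))  # (28, 28, 6)
```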
## Calculating Pooling-Layers in LeNet-5
The LeNet-5 is using average pooling; back then, when [this paper](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf)
was published, people used average pooling much more than max pooling.
$$
(n - k +p+s)/s\ x\ (n - k +p+s)/s\ x\ Nc
$$
### Variable definition
$$n=dimension\ of\ input-tensor$$
$$k=pooling\ window\ size$$
$$p=padding\ (32x32\ by\ p=1\ \rightarrow\ 34x34)$$
$$s=stride$$
### Calculating first layer
Variables are defined like:
$$n=28$$
$$k=2$$
$$p=0$$
$$s=2$$
$$Nc=6$$
Add Variables to the formula:
$$
(28- 2 +0+2)/2\ x\ (28 - 2 +0+2)/2\ x\ 6 == \ 14\ x\ 14\ x\ 6
$$
The calculation can now be done analogously for the remaining layers until you reach the last layer before the 1x1xX
output layer, as shown in the sketch below. Afterward, you use FC layers and softmax for your classification.
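The following Python sketch applies the two formulas layer by layer. The layer configuration (6 and 16 filters of 5x5, 2x2 average pooling with a stride of 2, and a final convolution with 120 filters of 5x5) follows the original LeNet-5 paper:

```python
# convolution: (n + 2p - f) / s + 1, pooling: (n - k + p + s) / s (per spatial dimension)
def conv(n, f, p=0, s=1):
    return (n + 2 * p - f) // s + 1

def pool(n, k, p=0, s=2):
    return (n - k + p + s) // s

n = 32                 # input: 32x32x1
n = conv(n, f=5)       # C1: 6 filters of 5x5   -> 28x28x6
n = pool(n, k=2, s=2)  # S2: 2x2 average pool   -> 14x14x6
n = conv(n, f=5)       # C3: 16 filters of 5x5  -> 10x10x16
n = pool(n, k=2, s=2)  # S4: 2x2 average pool   -> 5x5x16
n = conv(n, f=5)       # C5: 120 filters of 5x5 -> 1x1x120
print(n)               # 1 -> followed by FC layers and softmax
```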
---
Reference:
[vdumoulin/conv_arithmetic · GitHub](https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md) |
Convert Transformers to ONNX with Hugging Face Optimum | https://www.philschmid.de/convert-transformers-to-onnx | 2022-06-21 | [
"BERT",
"HuggingFace",
"ONNX",
"Optimum"
] | Introduction guide about ONNX and Transformers. Learn how to convert transformers like BERT to ONNX and what you can do with it. | Hundreds of Transformers experiments and models are uploaded to the [Hugging Face Hub](https://huggingface.co/) every single day. Machine learning engineers and students conducting those experiments use a variety of frameworks like PyTorch, TensorFlow/Keras, or others. These models are already used by thousands of companies and form the foundation of AI-powered products.
If you deploy Transformers models in production environments, we recommend exporting them first into a serialized format that can be loaded, optimized, and executed on specialized runtimes and hardware.
In this guide, you'll learn about:
1. [What is ONNX?](#1-what-is-onnx)
2. [What is Hugging Face Optimum?](#2-what-is-hugging-face-optimum)
3. [What Transformers architectures are supported?](#3-what-transformers-architectures-are-supported)
4. [How can I convert a Transformers model (BERT) to ONNX?](#4-how-can-i-convert-a-transformers-model-bert-to-onnx)
5. [What's next?](#5-whats-next)
Let's get started! 🚀
---
If you are interested in optimizing your models to run with maximum efficiency, check out the [🤗 Optimum library](https://github.com/huggingface/optimum).
## 1. What is ONNX?
The [ONNX (Open Neural Network eXchange)](http://onnx.ai/) is an open standard and format to represent machine learning models. ONNX defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow.
![graph](/static/blog/convert-transformers-to-onnx/graph.png)
Pseudo ONNX Graph. Visualized with [Netron](https://github.com/lutzroeder/Netron)
When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an `intermediate representation`) which represents the flow of data through the neural network.
> **Important:** ONNX is not a runtime. ONNX is only the representation that can be used with runtimes like ONNX Runtime. You can find a list of supported accelerators [here](https://onnx.ai/supported-tools.html#deployModel).
➡️[Learn more about ONNX.](https://onnx.ai/about.html)
## 2. What is Hugging Face Optimum?
[Hugging Face Optimum](https://github.com/huggingface/optimum) is an open-source library and an extension of [Hugging Face Transformers](https://github.com/huggingface/transformers), that provides a unified API of performance optimization tools to achieve maximum efficiency to train and run models on accelerated hardware, including toolkits for optimized performance on [Graphcore IPU](https://github.com/huggingface/optimum-graphcore) and [Habana Gaudi](https://github.com/huggingface/optimum-habana).
Optimum can be used for conversion, quantization, graph optimization, and accelerated training & inference with support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines).
Below you can see a typical customer journey of how you can leverage Optimum with ONNX.
![user-journey.png](/static/blog/convert-transformers-to-onnx/user-journey.png)
[➡️ Learn more about Optimum](https://huggingface.co/blog/hardware-partners-program)
## 3. What Transformers architectures are supported?
A list of all supported Transformers architectures can be found in the [ONNX section of the Transformers documentation](https://huggingface.co/docs/transformers/serialization#onnx). Below is an excerpt of the most commonly used architectures which can be converted to ONNX and optimized with [Hugging Face Optimum](https://huggingface.co/docs/optimum/index)
- ALBERT
- BART
- BERT
- DistilBERT
- ELECTRA
- GPT Neo
- GPT-J
- GPT-2
- RoBERTa
- T5
- ViT
- XLM
- …
[➡️ All supported architectures](https://huggingface.co/docs/transformers/serialization#onnx)
## 4. How can I convert a Transformers model (BERT) to ONNX?
There are currently three ways to convert your Hugging Face Transformers models to ONNX. In this section, you will learn how to export [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) for `text-classification` using all three methods going from the low-level `torch` API to the most user-friendly high-level API of `optimum`. Each method will do exactly the same thing.
### Export with `torch.onnx` (low-level)
[torch.onnx](https://pytorch.org/docs/stable/onnx.html) enables you to convert model checkpoints to an ONNX graph by the `export` method. But you have to provide a lot of values like `input_names`, `dynamic_axes`, etc.
You’ll first need to install some dependencies:
```python
pip install transformers torch
```
exporting our checkpoint with `export`
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# load model and tokenizer
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dummy_model_input = tokenizer("This is a sample", return_tensors="pt")
# export
torch.onnx.export(
model,
tuple(dummy_model_input.values()),
f="torch-model.onnx",
input_names=['input_ids', 'attention_mask'],
output_names=['logits'],
dynamic_axes={'input_ids': {0: 'batch_size', 1: 'sequence'},
'attention_mask': {0: 'batch_size', 1: 'sequence'},
'logits': {0: 'batch_size', 1: 'sequence'}},
do_constant_folding=True,
opset_version=13,
)
```
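To sanity-check the exported file, you can load it with the `onnx` package and run a test prediction with ONNX Runtime. The following is a minimal sketch and assumes you have installed both packages (`pip install onnx onnxruntime`):

```python
import onnx
import onnxruntime as ort
from transformers import AutoTokenizer

# validate the exported graph
onnx.checker.check_model(onnx.load("torch-model.onnx"))

# run a test prediction with ONNX Runtime on CPU
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
session = ort.InferenceSession("torch-model.onnx", providers=["CPUExecutionProvider"])
inputs = tokenizer("This is a sample", return_tensors="np")
logits = session.run(["logits"], dict(inputs))[0]
print(logits)
```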
### Export with `transformers.onnx` (mid-level)
[transformers.onnx](https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx) enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. That way you don’t have to provide the complex configuration for `dynamic_axes` etc.
You’ll first need to install some dependencies:
```python
pip install transformers[onnx] torch
```
Exporting our checkpoint with the `transformers.onnx`.
```python
from pathlib import Path
import transformers
from transformers.onnx import FeaturesManager
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification
# load model and tokenizer
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
feature = "sequence-classification"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# load config
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = model_onnx_config(model.config)
# export
onnx_inputs, onnx_outputs = transformers.onnx.export(
preprocessor=tokenizer,
model=model,
config=onnx_config,
opset=13,
output=Path("trfs-model.onnx")
)
```
### Export with Optimum (high-level)
[Optimum](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#switching-from-transformers-to-optimum-inference) Inference includes methods to convert vanilla Transformers models to ONNX using the `ORTModelForXxx` classes. To convert your Transformers model to ONNX you simply have to pass `from_transformers=True` to the `from_pretrained()` method and your model will be loaded and converted to ONNX leveraging the [transformers.onnx](https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx) package under the hood.
You’ll first need to install some dependencies:
```python
pip install optimum[onnxruntime]
```
Exporting our checkpoint with `ORTModelForSequenceClassification`
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english",from_transformers=True)
```
The best part about the conversion with Optimum is that you can immediately use the `model` to run predictions or load it [inside a pipeline.](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#switching-from-transformers-to-optimum-inference)
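As a minimal sketch of how this could look, the snippet below saves the converted model to a local folder and runs it inside a `transformers` pipeline; it assumes the `text-classification` pipeline accepts the `ORTModelForSequenceClassification` instance together with a tokenizer, as described in the Optimum inference documentation linked above.

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# persist the converted ONNX model for later use
model.save_pretrained("onnx/")
tokenizer.save_pretrained("onnx/")

# run inference with the ONNX model inside a transformers pipeline
onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(onnx_classifier("I love converting models to ONNX!"))
```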
## 5. What's next?
Since you have successfully converted your Transformers model to ONNX, the whole set of optimization and quantization tools is now open to use. Potential next steps can be:
- Use the onnx model for [Accelerated Inference with Optimum and Transformers Pipelines](https://huggingface.co/blog/optimum-inference)
- Apply [static quantization to your model](https://www.philschmid.de/static-quantization-optimum) for ~3x latency improvements
- Use ONNX runtime for [training](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/training)
- Convert your ONNX model to [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/) to improve GPU performance
- …
If you are interested in optimizing your models to run with maximum efficiency, check out the [🤗 Optimum library](https://github.com/huggingface/optimum).
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Controlled text-to-image generation with ControlNet on Inference Endpoints | https://www.philschmid.de/stable-diffusion-controlnet-endpoint | 2023-03-03 | [
"Diffusion",
"Inference",
"HuggingFace",
"Generation"
] | Learn how to deploy ControlNet Stable Diffusion Pipeline on Hugging Face Inference Endpoints to generate controlled images. | ControlNet is a neural network structure to control diffusion models by adding extra conditions.
With [ControlNet](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet), users can easily condition the generation with different spatial contexts such as a depth map, a segmentation map, a scribble, keypoints, and so on!
We can turn a cartoon drawing into a realistic photo with incredible coherence.
![example](/static/blog/stable-diffusion-controlnet-endpoint/example.jpg)
If you are now as impressed as I am, you are probably asking yourself: “ok, how can I integrate ControlNet into my applications in a scalable, reliable, and secure way? How can I use it as an API?”
That's where Hugging Face Inference Endpoints can help you! [🤗 Inference Endpoints](https://huggingface.co/inference-endpoints) offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
This blog post will teach you how to create ControlNet pipelines with Inference Endpoints using the [custom handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) feature. [Custom handlers](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) allow users to modify, customize and extend the inference step of your model.
![architecture](/static/blog/stable-diffusion-controlnet-endpoint/architecture.png)
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active credit card. (Add billing [here](https://huggingface.co/settings/billing))
2. You can access the UI at: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
The Tutorial will cover how to:
1. [Create ControlNet Inference Handler](#1-create-controlnet-inference-handler)
2. [Deploy Stable Diffusion ControlNet pipeline as Inference Endpoint](#2-deploy-stable-diffusion-controlnet-pipeline-as-inference-endpoint)
3. [Integrate ControlNet as API and send HTTP requests using Python](#3-integrate-controlnet-as-api-and-send-http-requests-using-python)
### TL;DR;
You can directly hit “deploy” on this repository to get started: [https://huggingface.co/philschmid/ControlNet-endpoint](https://huggingface.co/philschmid/ControlNet-endpoint)
## 1. Create ControlNet Inference Handler
This tutorial is not covering how you create the custom handler for inference. If you want to learn how to create a custom Handler for Inference Endpoints, you can either checkout the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler)
We are going to deploy [philschmid/ControlNet-endpoint](https://huggingface.co/philschmid/ControlNet-endpoint), which implements the following `handler.py` for [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). The repository does not include the weights and loads the model on endpoint creation. This means you can easily adjust which Stable Diffusion model you want to use by editing the `id` in the [handler](https://huggingface.co/philschmid/ControlNet-endpoint/blob/9fbec2fdc74198b987863895a27bc47619dacc83/handler.py#L64).
The custom handler implements a `CONTROLNET_MAPPING`, allowing us to define different control types at inference time. Supported control types are `canny_edge`, `pose`, `depth`, `scribble`, `segmentation`, `normal`, `hed`, and `hough`.
The handler expects the following payload.
```json
{
"inputs": "A prompt used for image generation",
"negative_prompt": "low res, bad anatomy, worst quality, low quality",
"controlnet_type": "depth",
"image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
}
```
The `image` attribute includes the image as `base64` string. You can additionally provide [hyperparameters](https://huggingface.co/philschmid/ControlNet-endpoint/blob/9fbec2fdc74198b987863895a27bc47619dacc83/handler.py#L94) to customize the pipeline, including `num_inference_steps`, `guidance_scale`, `height` , `width`, and `controlnet_conditioning_scale`.
## 2. Deploy Stable Diffusion ControlNet pipeline as Inference Endpoint
UI: [https://ui.endpoints.huggingface.co/new?repository=philschmid/ControlNet-endpoint](https://ui.endpoints.huggingface.co/new?repository=philschmid/ControlNet-endpoint)
We can now deploy the model as an Inference Endpoint. We can deploy our Custom Handler the same way as a regular Inference Endpoint.
Select the repository, the cloud, and the region, adjust the instance and security settings, and deploy.
![repository](/static/blog/stable-diffusion-controlnet-endpoint/repository.png)
Since the weights are not included in the repository, the UI suggests a CPU instance to deploy the model.
We want to change the instance to `GPU [medium] · 1x Nvidia A10G` to get decent performance for our pipeline.
![instance](/static/blog/stable-diffusion-controlnet-endpoint/instance.png)
We can then deploy our model by clicking “Create Endpoint”
## 3. Integrate ControlNet as API and send HTTP requests using Python
We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`). We need to replace the `ENDPOINT_URL` and `HF_TOKEN` with our values and then we can send a request. Since we are using it as an API, we need to provide at least a `prompt` and an `image`.
To test the API, we download a sample image from the repository:
```bash
wget https://huggingface.co/philschmid/ControlNet-endpoint/resolve/main/huggingface.png
```
We can now run our python script using the `huggingface.png` to edit the image.
```python
import json
from typing import List
import requests as r
import base64
from PIL import Image
from io import BytesIO
ENDPOINT_URL = "" # your endpoint url
HF_TOKEN = "" # your huggingface token `hf_xxx`
# helper image utils
def encode_image(image_path):
with open(image_path, "rb") as i:
b64 = base64.b64encode(i.read())
return b64.decode("utf-8")
def predict(prompt, image, negative_prompt=None, controlnet_type = "normal"):
image = encode_image(image)
# prepare sample payload
request = {"inputs": prompt, "image": image, "negative_prompt": negative_prompt, "controlnet_type": controlnet_type}
# headers
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
"Accept": "image/png" # important to get an image back
}
response = r.post(ENDPOINT_URL, headers=headers, json=request)
if response.status_code != 200:
print(response.text)
raise Exception("Prediction failed")
img = Image.open(BytesIO(response.content))
return img
prediction = predict(
prompt = "cloudy sky background lush landscape house and green trees, RAW photo (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
negative_prompt ="lowres, bad anatomy, worst quality, low quality, city, traffic",
controlnet_type = "hed",
image = "huggingface.png"
)
prediction.save("result.png")
```
The result of the request should be a `PIL` image we can display:
![huggingface_result.png](/static/blog/stable-diffusion-controlnet-endpoint/huggingface_result.png)
## Conclusion
We successfully created and deployed a ControlNet Stable Diffusion inference handler to Hugging Face Inference Endpoints in less than 30 minutes.
Having scalable, secure API endpoints will allow you to move from experimenting (e.g., in a Space) to integrated production workloads, e.g., a Javascript frontend/desktop app and an API backend.
Now, it's your turn! [Sign up](https://ui.endpoints.huggingface.co/new) and create your custom handler within a few minutes!
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
How to use Google Tag Manager and Google Analytics without Cookies | https://www.philschmid.de/how-to-use-google-tag-manager-and-google-analytics-without-cookies | 2020-06-06 | [
"Analytics",
"Web"
] | Connect your user behavior with technical insights without using cookies to improve your customer experience. | _"Web analytics is the measurement, collection, analysis, and reporting of web data for purposes of understanding and
optimizing web usage. However, Web analytics is not just a process for measuring web traffic but can be used as a tool
for business and market research, and to assess and improve the effectiveness of a website."
[Source](https://en.wikipedia.org/wiki/Web_analytics)_
So web analytics enables you to:
- connect your user behavior with technical insights.
- improve your customer experience, by understanding your users and where they might get stuck.
- track the value of expenses through user conversions.
- learn to know your target group and how you can reach them.
- ...
Due to these facts, more than 50 million websites/web apps around the world use analytics tools like Google Analytics.
Most of these tools use cookies to track user behaviors. If you live in Europe you probably have heard of the
[GDPR, the regulation in EU law on data protection and privacy](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation).
Due to the GDPR, it is no longer easy to use cookies for web analytics. I am neither a lawyer nor I want to go into
detail here. If you want to know more about it and be secure you have to talk to a lawyer. Google also provides a
[website with information about it](https://privacy.google.com/businesses/compliance/).
But I can help you learn how to use Google Tag Manager and Google Analytics without cookies.
---
## Google Tag Manager
![Google Tag Manager Logo](/static/blog/how-to-use-google-tag-manager-and-google-analytics-without-cookies/GTM.png)
Google Tag Manager is a free tool, which allows you to manage and deploy marketing/analytics tags (snippets of code) on
your website/web app. These Tags can be used to share information from one data source (e.g. your website) to another
data source (e.g. Google Analytics).
The key components of Google Tag Manager are **Tags**, **Triggers** and **Variables.**
**Tags** are snippets of code, which tell Google Tag Manager what to do. Examples of Tags are Google Analytics, Google
Adwords, Facebook Pixel.
**Triggers** are the way events are handled. They tell Google Tag Manager what to do and when to do it. Examples of
Triggers are `page view`, `window.loaded`, `clicks`, Javascript `errors`, or `custom events` (Javascript functions).
**Variables** are additional information for your Tags and Triggers to work. Examples are DOM elements, click classes,
click text.
Google even has a [video series](https://www.youtube.com/watch?v=9A-i7EWXzjs) on how to get started with Google Tag
Manager.
---
## Purpose
I write this tutorial because I saw a gap in the documentation of Google Tag Manager and how to use it without cookies.
The Google Analytics documentation already includes examples
[on how to use Google Analytics without cookies](https://developers.google.com/analytics/devguides/collection/analyticsjs/cookies-user-id#using_localstorage_to_store_the_client_id)
but not for Google Tag Manager.
Google Tag Manager has different attribute keys when using the Google Analytics Tag than when using Google Analytics
standalone. In Google Analytics you have the attribute `clientId`, and in Google Tag Manager you have the attribute
`client_id`.
Google is providing a
[list of these field mappings](https://developers.google.com/analytics/devguides/collection/gtagjs/migration) but the
list is missing the important attribute mapping for `storage`, which is needed to prevent the creation of cookies.
---
## Tutorial
In this tutorial, we use Google Analytics as a Tag. Before we get started make sure you have a valid
`GA_MEASUREMENT_ID`. I assume that you have already worked before with Google Analytics and you are therefore familiar
with the terminologies used.
In their
"[getting started with Google Tag Manager](https://developers.google.com/analytics/devguides/collection/gtagjs)" Gooogle
provides a snippet, with which you initialize Google Tag Manager with Google Analytics as Tag. But this would create
cookies for it.
```html
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
<script>
window.dataLayer = window.dataLayer || []
function gtag() {
dataLayer.push(arguments)
}
gtag('js', new Date())
gtag('config', 'GA_MEASUREMENT_ID')
</script>
```
To initialize Google Tag Manager without cookies, we only need to add two attributes to `'config'`.
```html
<!-- Global site tag (gtag.js) - Google Analytics with out cookies -->
<script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
<script>
window.dataLayer = window.dataLayer || []
function gtag() {
dataLayer.push(arguments)
}
gtag('js', new Date())
gtag('config', 'GA_MEASUREMENT_ID', {
client_storage: 'none',
client_id: CLIENT_ID,
})
</script>
```
The only issue we face without cookies is that we need to save the `client_id` somewhere. Normally the `client_id` is saved in the cookie. To overcome this, we save the `client_id` in `window.localStorage`. We also need to create the value of the `client_id` ourselves.
[The `client_id` is basically the device id](https://support.google.com/analytics/answer/6205850?hl=en).
We can use any value for the `client_id`, but to be sure we won't get any duplicates we use a `uuid` for this.
Below you find a complete example, which generates a `uuid` for the `client_id`, saves it in `window.localStorage`, and
initializes Google Tag Manager.
```html
<!-- Global site tag (gtag.js) - Google Analytics without cookies -->
<script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
<script async src="https://cdn.jsdelivr.net/npm/uuid@latest/dist/umd/uuidv4.min.js"></script>
<script>
// https://developers.google.com/tag-manager/devguide
window.dataLayer = window.dataLayer || []
function gtag() {
dataLayer.push(arguments)
}
gtag('js', new Date())
// defines window.localstorage key
const GA_LOCAL_STORAGE_KEY = 'ga:clientId'
// checks if localstorage is available
if (window.localStorage) {
// checks if user was already connected and loads client_id from localstorage
if (localStorage.getItem(GA_LOCAL_STORAGE_KEY)) {
// creates new tracker with same client_id as the last time the user visited
gtag('js', new Date())
gtag('config', 'GA_MEASUREMENT_ID', {
send_page_view: true,
client_storage: 'none',
client_id: localStorage.getItem(GA_LOCAL_STORAGE_KEY),
})
} else {
// creates a new client_id as a uuid and saves it in localStorage
window.localStorage.setItem(GA_LOCAL_STORAGE_KEY, uuidv4())
// creates new tracker with the new client_id
gtag('js', new Date())
gtag('config', 'GA_MEASUREMENT_ID', {
send_page_view: true,
client_storage: 'none',
client_id: localStorage.getItem(GA_LOCAL_STORAGE_KEY),
})
}
}
</script>
```
---
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Hugging Face Transformers and Habana Gaudi AWS DL1 Instances | https://www.philschmid.de/habana-distributed-training | 2022-07-05 | [
"BERT",
"Habana",
"HuggingFace",
"Optimum"
] | Learn how to fine-tune XLM-RoBERTa for multi-lingual multi-class text-classification using a Habana Gaudi-based DL1 instance. | In this blog, you will learn how to fine-tune [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large) for multi-lingual multi-class text-classification using a Habana Gaudi-based [DL1 instance](https://aws.amazon.com/ec2/instance-types/dl1/) on AWS to take advantage of the cost performance benefits of Gaudi. We will use the Hugging Face Transformers, Optimum Habana, and Datasets libraries to fine-tune a pre-trained transformer for multi-class text classification. In particular, we will fine-tune [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) using the [Amazon Science Massive](https://huggingface.co/datasets/AmazonScience/massive) dataset. Before we get started, we need to set up the deep learning environment.
You will learn how to:
1. [Setup Habana Gaudi instance](#1-setup-habana-gaudi-instance)
2. [Load and process the dataset](#2-load-and-process-the-dataset)
3. [Create a `GaudiTrainer` and run single HPU fine-tuning](#3-create-a-gauditrainer-and-run-single-hpu-fine-tuning)
4. [Run distributed data parallel training with `GaudiTrainer`](#4-run-distributed-data-parallel-training-with-gauditrainer)
5. [Cost performance benefits of Habana Gaudi on AWS](#5-cost-performance-benefits-of-habana-gaudi-on-aws)
### Requirements
Before we can start, make sure you have met the following requirements:
- AWS Account with quota for [DL1 instance type](https://aws.amazon.com/ec2/instance-types/dl1/)
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed
- AWS IAM user [configured in CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) with permission to create and manage ec2 instances
## 1. Setup Habana Gaudi instance
In this example we are going to use Habana Gaudi on AWS with the DL1 instance. We already created a blog post in the past on how to [Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS](https://www.philschmid.de/getting-started-habana-gaudi). If you haven't read this blog post yet, please read it first and go through the steps on how to set up the environment.
Or, if you feel comfortable, you can use the `start_instance.sh` in the root of the repository to create your DL1 instance and then continue at step [4. Use Jupyter Notebook/Lab via ssh](https://www.philschmid.de/getting-started-habana-gaudi#4-use-jupyter-notebooklab-via-ssh) in the setup blog post.
1. run the Habana docker container and mount the current directory
```bash
docker run -ti --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host -v $(pwd):/home/ubuntu/dev --workdir=/home/ubuntu/dev vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:latest
```
2. install jupyter
```bash
pip install jupyter
```
3. clone repository
```bash
git clone https://github.com/philschmid/deep-learning-habana-huggingface.git
cd fine-tuning
```
4. run jupyter notebook
```bash
jupyter notebook --allow-root
# http://localhost:8888/?token=f8d00db29a6adc03023413b7f234d110fe0d24972d7ae65e
```
5. continue here
_**NOTE**: The following steps assume that the code/cells are running on a gaudi instance with access to HPUs_
First, let's make sure we have access to the HPUs.
```python
import habana_frameworks.torch.core as htcore
print(f"device available:{htcore.is_available()}")
print(f"device_count:{htcore.get_device_count()}")
```
Next, let's install our Hugging Face dependencies and `git-lfs`.
```python
!pip install transformers datasets tensorboard matplotlib pandas sklearn
!pip install git+https://github.com/huggingface/optimum-habana.git # workaround until release of optimum-habana
# we will use git-lfs to upload models and artifacts to the hub.
#! sudo apt-get install git-lfs
!apt-get install git-lfs
```
To finish our setup, let's log into the [Hugging Face Hub](https://huggingface.co/models) so we can push our model artifacts, logs, and metrics to the hub during and after training.
_To be able to push our model to the Hub, you need to register on [Hugging Face](https://huggingface.co/join)._
We will use the `notebook_login` util from the `huggingface_hub` package to log into our account. You can get your token in the settings at [Access Tokens](https://huggingface.co/settings/tokens)
```python
from huggingface_hub import notebook_login
notebook_login()
```
## 2. Load and process the dataset
As dataset, we will use [AmazonScience/massive](https://huggingface.co/datasets/AmazonScience/massive), a multilingual intent (text) classification dataset. The dataset contains over 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation.
We are going to use the:
- English - United States (en-US)
- German - Germany (de-DE)
- French - France (fr-FR)
- Italian - Italy (it-IT)
- Portuguese - Portugal (pt-PT)
- Spanish - Spain (es-ES)
- Dutch - Netherlands (nl-NL)
splits. The dataset will have ~80 000 datapoints for training and ~14 000 for evaluation equally split across the different languages.
The model we will fine-tune is [xlm-roberta-large](https://huggingface.co/xlm-roberta-large), a multilingual RoBERTa model. It was pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
```python
model_id = "xlm-roberta-large"
gaudi_config_id= "Habana/roberta-large" # more here: https://huggingface.co/Habana
dataset_id = "AmazonScience/massive"
dataset_configs=["en-US","de-DE","fr-FR","it-IT","pt-PT","es-ES","nl-NL"]
seed=33
repository_id = "habana-xlm-r-large-amazon-massive"
```
You can change these configurations to your needs, e.g. the `model_id` to another BERT-like model for a different language, e.g. `BERT-Large`.
_**NOTE:** Not all 100+ transformers architectures are currently supported by `optimum-habana`. You can find a list of supported architectures in the [validated models](https://github.com/huggingface/optimum-habana#validated-models) section._
We use the `datasets` library to download and preprocess our dataset. As a first step, we will load the 7 different configurations, remove the unnecessary features/columns, and then concatenate them into a single dataset.
```python
from datasets import load_dataset, concatenate_datasets, DatasetDict
# the columns we want to keep in the dataset
keep_columns = ["utt", "scenario"]
# process individual datasets
proc_lan_dataset_list=[]
for lang in dataset_configs:
# load dataset for language
lang_ds = load_dataset(dataset_id, lang)
# only keep the 'utt' & 'scenario' columns
lang_ds = lang_ds.remove_columns([col for col in lang_ds["train"].column_names if col not in keep_columns])
# rename the columns to match transformers schema
lang_ds = lang_ds.rename_column("utt", "text")
lang_ds = lang_ds.rename_column("scenario", "label")
proc_lan_dataset_list.append(lang_ds)
# concat single splits into one
train_dataset = concatenate_datasets([ds["train"] for ds in proc_lan_dataset_list])
eval_dataset = concatenate_datasets([ds["validation"] for ds in proc_lan_dataset_list])
# create dataset dict for easier processing
dataset = DatasetDict(dict(train=train_dataset,validation=eval_dataset))
print(dataset)
```
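If you want to quickly verify the split sizes mentioned above, a small sanity check on the concatenated `DatasetDict` from the cell above is enough:

```python
# quick sanity check of the concatenated splits
print(f"train samples: {len(dataset['train'])}")            # ~80 000
print(f"validation samples: {len(dataset['validation'])}")  # ~14 000
```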
Before we prepare the dataset for training, let's take a quick look at the class distribution of the dataset.
```python
import pandas as pd
df = dataset["train"].to_pandas()
df.hist()
```
![distribution](/static/blog/habana-distributed-training/distribution.png)
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Additionally, we add `truncation=True` and `padding="max_length"` to align the lengths and truncate texts that are longer than the maximum size allowed by the model.
```python
def process(examples):
tokenized_inputs = tokenizer(
examples["text"], padding="max_length", truncation=True
)
return tokenized_inputs
tokenized_datasets = dataset.map(process, batched=True)
tokenized_datasets["train"].features
```
Now that our `dataset` is processed, we can download the pre-trained model and fine-tune it.
## 3. Create a `GaudiTrainer` and run single HPU fine-tuning
Normally you would use the [Trainer](https://huggingface.co/docs/transformers/v4.19.4/en/main_classes/trainer#transformers.Trainer) and [TrainingArguments](https://huggingface.co/docs/transformers/v4.19.4/en/main_classes/trainer#transformers.TrainingArguments) classes to fine-tune a PyTorch-based transformer model. Since we are using the `optimum-habana` library, we can use the `GaudiTrainer` and `GaudiTrainingArguments` classes instead. The `GaudiTrainer` is a wrapper around the `Trainer` and `TrainingArguments` classes that allows you to fine-tune a transformer model on a Gaudi instance with a similar API. Below you see how easy it is to migrate from the `Trainer` and `TrainingArguments` classes to the `GaudiTrainer` and `GaudiTrainingArguments` classes.
```diff
-from transformers import Trainer, TrainingArguments
+from optimum.habana import GaudiTrainer, GaudiTrainingArguments
# define the training arguments
-training_args = TrainingArguments(
+training_args = GaudiTrainingArguments(
+ use_habana=True,
+ use_lazy_mode=True,
+ gaudi_config_name=path_to_gaudi_config,
...
)
# Initialize our Trainer
-trainer = Trainer(
+trainer = GaudiTrainer(
model=model,
args=training_args,
train_dataset=train_dataset
... # other arguments
)
```
Before we create our `GaudiTrainer` we first need to define a `compute_metrics` function to evaluate our model on the test set. This function will be used during the training process to compute the `accuracy` & `f1` of our model.
```python
from datasets import load_metric
import numpy as np
# define metrics and metrics function
f1_metric = load_metric("f1")
accuracy_metric = load_metric( "accuracy")
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
acc = accuracy_metric.compute(predictions=predictions, references=labels)
f1 = f1_metric.compute(predictions=predictions, references=labels, average="micro")
return {
"accuracy": acc["accuracy"],
"f1": f1["f1"],
}
```
Hyperparameter Definition, Model Loading
```python
from transformers import AutoModelForSequenceClassification,DataCollatorWithPadding
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from huggingface_hub import HfFolder
# create label2id, id2label dicts for nice outputs for the model
labels = tokenized_datasets["train"].features["label"].names
num_labels = len(labels)
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
# define training args
training_args = GaudiTrainingArguments(
output_dir=repository_id,
use_habana=True,
use_lazy_mode=True,
gaudi_config_name=gaudi_config_id,
num_train_epochs=5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
)
# define model
model = AutoModelForSequenceClassification.from_pretrained(
model_id,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
)
# create Trainer
trainer = GaudiTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
# start training on 1x HPU
trainer.train()
# evaluate model
trainer.evaluate(eval_dataset=tokenized_datasets["validation"])
```
## 4. Run distributed data parallel training with `GaudiTrainer`
Running the training only on a single HPU core takes way too long (~5 hours). Luckily, the `DL1` instance has 8 available HPU cores, meaning we can leverage distributed training.
To run our training as distributed training we need to create a training script, which can be used with multiprocessing to run on all HPUs.
We have created a `scripts/train.py` which contains all the previous steps of the example so far. To execute our distributed training we use the `DistributedRunner` from `optimum-habana`; alternatively, you could check out the [gaudi_spawn.py](https://github.com/huggingface/optimum-habana/blob/main/examples/gaudi_spawn.py) in the [optimum-habana](https://github.com/huggingface/optimum-habana) repository.
```python
from optimum.habana.distributed import DistributedRunner
from optimum.utils import logging
world_size=8 # Number of HPUs to use (1 or 8)
# define distributed runner
distributed_runner = DistributedRunner(
command_list=["scripts/train.py --use_habana True"],
world_size=world_size,
use_mpi=True,
multi_hls=False,
)
# start job
ret_code = distributed_runner.run()
```
## 5. Cost performance benefits of Habana Gaudi on AWS
The distributed training on all 8x HPUs took in total 52 minutes. The [dl1.24xlarge](https://aws.amazon.com/ec2/instance-types/dl1/) instance on AWS costs \$13.11 per hour, leading to only \$11.55 for our training.
To provide a cost-performance comparison, we ran the same training on the AWS [p3.8xlarge](https://aws.amazon.com/ec2/instance-types/p3/?nc1=h_ls) instance, which costs roughly the same at \$12.24 per hour, but only has 4x accelerators (4x NVIDIA V100). The training on the p3.8xlarge instance took in total about 439 minutes and cost \$89.72.
This means the Habana Gaudi instance is **8.4x faster** and **7.7x cheaper** than the price-equivalent NVIDIA-powered instance.
Below is a detailed table of results. Additionally, both models are available on the Hugging Face Hub at [philschmid/habana-xlm-r-large-amazon-massive](https://huggingface.co/philschmid/habana-xlm-r-large-amazon-massive) and [philschmid/gpu-xlm-roberta-large-amazon-massive](https://huggingface.co/philschmid/gpu-xlm-roberta-large-amazon-massive).
| accelerator | training time (in minutes) | total cost | total batch size | aws instance type | instance price per hour |
| ------------------ | -------------------------- | ---------- | ---------------- | -------------------------------------------------------------------- | ----------------------- |
| Habana Gaudi (HPU) | 52.6 | $11.55 | 64 | [dl1.24xlarge](https://aws.amazon.com/ec2/instance-types/dl1/) | $13.11 |
| NVIDIA V100 (GPU) | 439.8 | $89.72 | 4 | [p3.8xlarge](https://aws.amazon.com/ec2/instance-types/p3/?nc1=h_ls) | $12.24 |
![comparison](/static/blog/habana-distributed-training/comparison.png)
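As a quick sanity check, the speed-up and cost factors can be reproduced directly from the numbers in the table above:

```python
# back-of-the-envelope check of the numbers from the table above
gaudi_minutes, gaudi_cost = 52.6, 11.55
v100_minutes, v100_cost = 439.8, 89.72

print(f"speed-up: {v100_minutes / gaudi_minutes:.2f}x")  # ~8.4x faster
print(f"cost reduction: {v100_cost / gaudi_cost:.2f}x")  # ~7.7x cheaper
```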
_Note: This comparison currently provides a limited view, since the NVIDIA V100 might not be the best GPU available for training such a large transformer model, resulting in an 8x smaller batch size. We plan to run a more detailed cost-performance benchmark including more instances, like the NVIDIA A100, and more models, e.g. DistilBERT and GPT-2._
## Conclusion
That's it for this tutorial. Now you know how to fine-tune Hugging Face Transformers on Habana Gaudi using Optimum. You learned how easily you can migrate from a `Trainer` based script to a `GaudiTrainer` based script and how to scale the training to multiple HPUs using the `DistributedRunner`.
Additionally, we ran a simple cost-performance benchmark, achieving **8.4x faster** and **7.7x cheaper** training on the Habana Gaudi instance than on the price-equivalent NVIDIA-powered instance.
Now it is time for you to migrate your training scripts!!
---
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Set up a CI/CD Pipeline for your Web app on AWS with Github Actions | https://www.philschmid.de/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-s3-with-github-actions | 2020-03-25 | [
"AWS",
"Vue",
"Github"
] | Automatically deploy your React, Vue, Angular or Svelte app on S3 and create a cache invalidation with Github Actions. | Nat Friedman described Github Actions as an API *“… to orchestrate any workflow, based on any event, while GitHub
manages the execution, provides rich feedback and secures every step along the way. With GitHub Actions, workflows and
steps are just code in a repository, so you can create, share, reuse, and fork your software development practices.”*
You can read his full blog post [here](https://github.blog/2019-08-08-github-actions-now-supports-ci-cd/).
This blog post explains how to set up a GitHub action within 5 minutes to automatically deploy your hosted web app on S3
and create an automatic CloudFront cache invalidation. You will be able to deploy any app that runs on S3, be it React,
Vue, Angular, or Svelte.
This Action uses 2 community-built actions from
[jakejarvis](https://github.com/jakejarvis/s3-sync-action) and [chetan](https://github.com/chetan/invalidate-cloudfront-action).
---
## TL;DR
If you don't want to read the complete post, copy the action from this Github repository and add the Github secrets
to your repository. If you fail, come back and read the article!
---
## Requirements
This post assumes that you have already deployed a working web app on S3 with a CloudFront distribution before. So the requirements are: a working web app with a `build` script in the `package.json`, a static hosting bucket on S3, a working CloudFront distribution, and an IAM user with programmatic access and enough permissions to deploy to S3 and create a CloudFront cache invalidation.
Now let’s get started with the tutorial.
---
## Create folders & files
The first thing we have to do is create the folder `.github` with a folder `workflows` in it on your project root level.
Afterward create the `deploy-app-on-s3.yaml` file in it.
## Creating the Github Action
Copy this code snippet into the `deploy-app-on-s3.yaml` file.
```yaml
name: deploy-app-on-s3
on:
pull_request:
branches: [master]
types: [closed]
jobs:
deploy:
runs-on: ubuntu-latest
env:
AWS_S3_BUCKET_NAME: your-bucket-name
AWS_CF_DISTRIBUTION_ID: your-cloudfront-id
strategy:
matrix:
node-version: [10.x]
steps:
- uses: actions/checkout@master
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install Dependencies
run: npm install
- name: Build Application
run: npm run-script build
- uses: jakejarvis/s3-sync-action@master
name: Upload App to S3 Bucket
with:
args: --follow-symlinks --delete --cache-control max-age=2592000
env:
AWS_S3_BUCKET: ${{ env.AWS_S3_BUCKET_NAME }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_REGION: 'eu-central-1'
SOURCE_DIR: 'dist'
- name: Create CloudFront Cache Invalidation
uses: chetan/invalidate-cloudfront-action@master
env:
DISTRIBUTION: ${{ env.AWS_CF_DISTRIBUTION_ID }}
PATHS: '/*'
AWS_REGION: 'eu-central-1'
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```
This code snippet describes the action. The Github Action will be triggered after your pull request on the `master`
branch is successfully closed. You can change this by adjusting the `on` section in the snippet. If you want a different
trigger for your action, look [here](https://help.github.com/en/actions/reference/events-that-trigger-workflows). You may have to adapt, for example, the `SOURCE_DIR` from "dist" to your build directory, or the `AWS_REGION`.
## Adjust environment variables
The third step is to adjust all environment variables. In this action, we have the bucket name `AWS_S3_BUCKET_NAME` and
the CloudFront distribution ID `AWS_CF_DISTRIBUTION_ID` as environment variables. The value of `AWS_S3_BUCKET_NAME` is the name of your S3 bucket, which you can find in the management console, and the value of `AWS_CF_DISTRIBUTION_ID` is the id of the CloudFront distribution.
You can get the ID for the `AWS_CF_DISTRIBUTION_ID` variable of the CloudFront distribution via the management console
by navigating to the "CloudFront" service and then going on "Distribution".
![AWS Management Console](/static/blog/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-s3-with-github-actions/console.png)
The table has a column "ID" with the value we need. You can recognize the correct row by identifying your S3 Bucket name
in the column "origin".
![AWS Cloudfront Service](/static/blog/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-s3-with-github-actions/cloudfront.png)
## Add secrets to your repository
The fourth and last step is to add secrets to your Github repository. For this Github Action, we need the access key ID
and secret access key from the IAM user as secrets called `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
If you are not sure on how to create an IAM user for the access key ID and secret access key you can read
[here](https://serverless-stack.com/chapters/create-an-iam-user.html).
### Adding the secrets
To add the secrets you have to go to the "settings" tab of your repository.
![Github Repository Naivgation](/static/blog/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-s3-with-github-actions/navigation.png)
then go to secrets in the left navigation
![Github Repository Settings](/static/blog/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-s3-with-github-actions/settings.png)
and on the secrets page, you can add your 2 secrets `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
![Github Repository Secrets](/static/blog/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-s3-with-github-actions/secrets.png)
## Grab a coffee and enjoy it
Lastly, you have to create a pull request from a feature branch into master and watch your action deploying your app to
s3 and creating a cache invalidation.
![Github Action](/static/blog/set-up-a-ci-cd-pipeline-for-your-web-app-on-aws-s3-with-github-actions/github-action.png)
---
I created a demo repository with a Vue app as an example. You can find the repository
[here](https://github.com/philschmid/blog-github-action-cicd-aws-s3). If something is unclear, let me know and I will
adjust it. |
New Serverless BERT with Huggingface, AWS Lambda, and AWS EFS | https://www.philschmid.de/new-serverless-bert-with-huggingface-aws-lambda | 2020-11-15 | [
"AWS",
"Serverless",
"Bert"
] | Build a serverless Question-Answering API using the Serverless Framework, AWS Lambda, AWS EFS, efsync, Terraform, the transformers Library from HuggingFace, and a `mobileBert` model from Google fine-tuned on SQuADv2. | 4 months ago I wrote the article
["Serverless BERT with HuggingFace and AWS Lambda"](https://www.philschmid.de/serverless-bert-with-huggingface-and-aws-lambda),
which demonstrated how to use BERT in a serverless way with AWS Lambda and the Transformers Library from HuggingFace.
In this article, I already predicted that
_["BERT and its fellow friends RoBERTa, GPT-2, ALBERT, and T5 will drive business and business ideas in the next few years and will change/disrupt business areas like the internet once did."](https://www.philschmid.de/serverless-bert-with-huggingface-and-aws-lambda)_
Since then the usage of BERT in Google Search increased from
[10% of English](https://www.blog.google/products/search/search-language-understanding-bert/) queries to almost
[100% of English-based queries](https://searchon.withgoogle.com/). But that's not it. Google powers now over
[70 languages with BERT for Google Search](https://twitter.com/searchliaison/status/1204152378292867074).
![google-search-snippet](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/google-search-snippet.png)
[https://youtu.be/ZL5x3ovujiM?t=484](https://youtu.be/ZL5x3ovujiM?t=484)
In this article, we are going to tackle all the drawbacks from my previous article, like model load time, dependency size, and usage.
We are going to build the same "Serverless BERT powered Question-Answering API" as last time. But instead of using
compression techniques to fit our Python dependencies into our AWS Lambda function, we are using a tool called
[efsync](https://github.com/philschmid/efsync). I built efsync to automatically upload dependencies to an AWS EFS
filesystem and then mount them into our AWS Lambda function. This allows us to include our machine learning model into
our function without the need to load it from S3.
## TL;DR;
We are going to build a serverless Question-Answering API using the [Serverless Framework](https://www.serverless.com/),
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html), [AWS EFS](https://aws.amazon.com/en/efs/),
[efsync](https://github.com/philschmid/efsync), [Terraform](https://www.terraform.io/), the
[transformers](https://github.com/huggingface/transformers) Library from HuggingFace, and a `mobileBert` model from
Google fine-tuned on [SQuADv2](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/).
You find the complete code for it in this
[Github repository](https://github.com/philschmid/new-serverless-bert-aws-lambda).
---
## Serverless Framework
![serverless-logo](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/serverless-logo.png)
[The Serverless Framework](https://www.serverless.com/) helps us develop and deploy AWS Lambda functions. It’s a CLI
that offers structure, automation, and best practices right out of the box.
---
## AWS Lambda
![aws-lambda-logo](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/lambda-logo.png)
[https://aws.amazon.com/de/lambda/features/](https://aws.amazon.com/de/lambda/features/)
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a serverless computing service that lets you
run code without managing servers. It executes your code only when required and scales automatically, from a few
requests per day to thousands per second.
---
## Amazon Elastic File System (EFS)
[Amazon EFS](https://aws.amazon.com/de/efs/) is a fully-managed service that makes it easy to set up, scale, and
cost-optimize file storage in the Amazon Cloud. Since June 2020 you can mount AWS EFS to AWS Lambda functions
---
## Efsync
[Efsync](https://github.com/philschmid/efsync) is a CLI/SDK tool, which automatically syncs files and dependencies to
AWS EFS. It enables you to install dependencies with the AWS Lambda runtime directly into your EFS filesystem and use
them in your AWS Lambda function.
---
## Terraform
![terraform-logo](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/terraform-logo.png)
[https://www.terraform.io/logos.html](https://www.terraform.io/logos.html)
[Terraform](https://www.terraform.io/) is an Infrastructure as Code (IaC) tool for building cloud-native infrastructure
safely and efficiently. Terraform enables you to use HCL (HashiCorp Configuration Language) to describe your
cloud-native infrastructure.
---
## Transformers Library by Huggingface
![huggingface-logo](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/huggingface-logo.png)
The [Transformers library](https://github.com/huggingface/transformers) provides state-of-the-art machine learning
architectures like BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5 for Natural Language Understanding (NLU) and Natural
Language Generation (NLG). It also provides thousands of pre-trained models in 100+ different languages.
---
## The Architecture
![blog-serverless-architecture](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/blog-serverless-architectures.png)
## Tutorial
Before we get started, make sure you have the [Serverless Framework](https://serverless.com/) and
[Terraform](https://www.terraform.io/) configured and set up. Furthermore, you need access to an AWS Account to create
an EFS Filesystem, API Gateway, and the AWS Lambda function.
In the tutorial, we are going to build a Question-Answering API with a pre-trained `BERT` model from Google.
We are going to send a context (small paragraph) and a question to the lambda function, which will respond with the
answer to the question.
**What are we going to do:**
- create the required infrastructure using `terraform`.
- use `efsync` to upload our Python dependencies to AWS EFS.
- create a Python Lambda function with the Serverless Framework.
- add the `BERT` model to our function and create an inference pipeline.
- Configure the `serverless.yaml`, add EFS and set up an API Gateway for inference.
- deploy & test the function.
You will need a new IAM user called `serverless-bert` with `AdministratorAccess`, configured with the AWS CLI using `aws configure --profile serverless-bert`. This IAM user is used throughout the complete tutorial. If you don't know how to do this, check out this [link](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
_**Note:** I don't recommend creating an IAM user with `AdministratorAccess` for production usage._
---
Before we start, I want to say that we're not going to go into detail for every step. If you want to understand more about
how to use Deep Learning in AWS Lambda I suggest you check out my other articles:
- [Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero)
- [How to Set Up a CI/CD Pipeline for AWS Lambda With GitHub Actions and Serverless](https://www.philschmid.de/how-to-set-up-a-ci-cd-pipeline-for-aws-lambda-with-github-actions-and-serverless)
- [Serverless BERT with HuggingFace and AWS Lambda](https://www.philschmid.de/serverless-bert-with-huggingface-and-aws-lambda)
- [efsync my first open-source MLOps toolkit](https://www.philschmid.de/efsync-my-first-open-source-mlops-toolkit)
You find the complete code in this [Github repository](https://github.com/philschmid/new-serverless-bert-aws-lambda).
---
## Create the required infrastructure using `terraform`
First, we define and create the required infrastructure using Terraform. If you haven't set it up, you can check out
this [tutorial](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started).
As infrastructure, we need an AWS EFS filesystem, an access point, and a mount target to be able to use it in our AWS
Lambda function. We could also create a VPC, but for the purpose of this tutorial, we are going to use the default VPC
and its subnets.
Next, we create a directory `serverless-bert/`, which contains all code for this tutorial with a subfolder `terraform/`
including our `main.tf` file.
```bash
mkdir serverless-bert serverless-bert/terraform && touch serverless-bert/terraform/main.tf
```
Afterwards, we open the `main.tf` with our preferred IDE and add the terraform resources. I provided a basic template
for all of them. If you want to customize them or add extra resources check out the
[documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) for all possibilities.
```hcl
# provider
provider "aws" {
region = "eu-central-1"
shared_credentials_file = "~/.aws/credentials"
profile = "serverless-bert"
}
# get the default VPC and its subnets
data "aws_vpc" "default" {
default = true
}
data "aws_subnet_ids" "subnets" {
vpc_id = data.aws_vpc.default.id
}
# EFS File System
resource "aws_efs_file_system" "efs" {
creation_token = "serverless-bert"
}
# Access Point
resource "aws_efs_access_point" "access_point" {
file_system_id = aws_efs_file_system.efs.id
}
# Mount Targets
resource "aws_efs_mount_target" "efs_targets" {
for_each = data.aws_subnet_ids.subnets.ids
subnet_id = each.value
file_system_id = aws_efs_file_system.efs.id
}
#
# SSM Parameter for serverless
#
resource "aws_ssm_parameter" "efs_access_point" {
name = "/efs/accessPoint/id"
type = "String"
value = aws_efs_access_point.access_point.id
overwrite = true
}
```
To change the name of the EFS you can edit the value `creation_token` in the `aws_efs_file_system` resource. Otherwise, the
name of the EFS will be "serverless-bert". Additionally, we create an SSM parameter for the `efs_access_point_id` at the
end to use it later in our `serverless.yaml`.
To use Terraform, we first run `terraform init` to initialize our project and provider (AWS). Be aware that we have to be in
the `terraform/` directory.
```bash
terraform init
```
Afterwards, we check our IaC definitions with `terraform plan`
```bash
terraform plan
```
When this is complete we create our infrastructure with `terraform apply`
```bash
terraform apply
```
![terraform-apply](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/terraform-apply.png)
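If you want to verify that the access point id ended up in SSM (we will reference it later in the `serverless.yaml` as `${ssm:/efs/accessPoint/id}`), a minimal check with `boto3` could look like the sketch below — it assumes the same `serverless-bert` profile and `eu-central-1` region as in the provider block above:

```python
import boto3

# use the same profile and region as the terraform provider
session = boto3.Session(profile_name="serverless-bert", region_name="eu-central-1")
ssm = session.client("ssm")

# the parameter name matches the aws_ssm_parameter resource in main.tf
parameter = ssm.get_parameter(Name="/efs/accessPoint/id")
print(parameter["Parameter"]["Value"])  # e.g. fsap-0123456789abcdef0
```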
---
## Use `efsync` to upload our Python dependencies to AWS EFS
The next step is to add and install our dependencies on our AWS EFS filesystem. Therefore we use a tool called `efsync`.
I created [efsync](https://github.com/philschmid/efsync) to install dependencies with the AWS Lambda runtime directly
into your EFS filesystem and use them in your AWS Lambda function.
install efsync by running `pip3 install efsync`
```bash
pip3 install efsync
```
After it is installed we create a `requirements.txt` in our root directory `serverless-bert/` and add our dependencies
to it.
```
https://download.pytorch.org/whl/cpu/torch-1.5.0%2Bcpu-cp38-cp38-linux_x86_64.whl
transformers==3.4.0
```
Efsync provides different [configurations](https://github.com/philschmid/efsync#sdk). This time we use the CLI with a
`yaml` configuration. For that, we create an `efsync.yaml` file in our root directory.
```yaml
#standard configuration
efs_filesystem_id: <efs-filesystem-id> # aws efs filesystem id
subnet_Id: <subnet-id-of-mount-target> # subnet of which the efs is running in
ec2_key_name: efsync-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # Defines if the EFS should be cleaned up before uploading. values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: serverless-bert # aws iam profile with required permission configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# pip dependencies configurations
efs_pip_dir: lib # pip directory on ec2
python_version: 3.8 # python version used for installing pip dependencies -> should be used as lambda runtime afterwards
requirements: requirements.txt # path + file to requirements.txt which holds the installable pip dependencies
```
Here we have to adjust the values of `efs_filesystem_id` and `subnet_Id`. Get these values by either looking them up in
the management console or using these two CLI commands.
![console-efs-id](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/console-efs-id.png)
```bash
aws efs describe-file-systems --creation-token serverless-bert --profile serverless-bert
```
![efs-id](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/efs-id.png)
Beware that if you changed the `creation_token` earlier you have to adjust it here.
![console-mount-targets](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/console-mount-targets.png)
```bash
aws efs describe-mount-targets --file-system-id <filesystem-id> --profile serverless-bert
```
![mount-targets](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/mount-targets.png)
You can choose one of your `subnet_Ids` for the `efsync.yaml` configuration. If you want to learn more about the
configuration options, you can read more [here](https://github.com/philschmid/efsync).
After the configuration of our `efsync.yaml` we run `efsync -cf efsync.yaml` to install our Python dependencies on our
AWS EFS filesystem. This will take around 5-10 Minutes.
```bash
efsync -cf efsync.yaml
```
![efsync](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/efsync.png)
---
## Create a Python Lambda function with the Serverless Framework
Third, we create our AWS Lambda function by using the Serverless CLI with the `aws-python3` template.
```bash
serverless create --template aws-python3 --path function
```
This CLI command will create a new directory containing a `handler.py`, `.gitignore`, and `serverless.yaml` file. The
`handler.py` contains some basic boilerplate code.
```python
import json
def hello(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"input": event
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
---
## Add the `BERT` model to our function and create an inference pipeline
Since we are not including our Python dependencies in our AWS Lambda function, we have around 250MB of storage to use for our model files. For those who are not that familiar with AWS Lambda and its limitations, you can check out the [AWS Lambda quotas documentation](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html).
_If you want to use models that are bigger than 250MB, you could use efsync to upload them to EFS and then load them
from there. Read more [here](https://www.philschmid.de/efsync-my-first-open-source-mlops-toolkit)._
To add our `BERT` model to our function we have to load it from the
[model hub of HuggingFace](https://huggingface.co/models). For this, I have created a python script. Before we can
execute this script we have to install the `transformers` library in our local environment and create a `model`
directory in our `function/` directory.
```bash
mkdir model function/model
```
```bash
pip3 install torch==1.5.0 transformers==3.4.0
```
After we have installed `transformers`, we create a `get_model.py` file in the `function/` directory and include the script
below.
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
def get_model(model):
"""Loads model from Hugginface model hub"""
try:
model = AutoModelForQuestionAnswering.from_pretrained(model,use_cdn=True)
model.save_pretrained('./model')
except Exception as e:
raise(e)
def get_tokenizer(tokenizer):
"""Loads tokenizer from Hugginface model hub"""
try:
tokenizer = AutoTokenizer.from_pretrained(tokenizer)
tokenizer.save_pretrained('./model')
except Exception as e:
raise(e)
get_model('mrm8488/mobilebert-uncased-finetuned-squadv2')
get_tokenizer('mrm8488/mobilebert-uncased-finetuned-squadv2')
```
To execute the script we run `python3 get_model.py` in the `function/` directory.
```bash
python3 get_model.py
```
_**Tip**: add the `model` directory to gitignore._
The next step is to adjust our `handler.py` and include our `serverless_pipeline()`.
First, we add all the required imports and add our EFS filesystem to the `PYTHONPATH` so we can import our dependencies
from there. Therefore we use `sys.path.append(os.environ['EFS_PIP_PATH'])`. We will define the `EFS_PIP_PATH` later in
our `serverless.yaml`.
We create a `serverless_pipeline()` function, which initializes our model and tokenizer and returns a `predict` function we can use in our `handler`.
```python
import sys
import os
# adds EFS Filesystem to our PYTHONPATH
sys.path.append(os.environ['EFS_PIP_PATH']) # nopep8 # noqa
import json
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, AutoConfig
def encode(tokenizer, question, context):
"""encodes the question and context with a given tokenizer"""
encoded = tokenizer.encode_plus(question, context)
return encoded["input_ids"], encoded["attention_mask"]
def decode(tokenizer, token):
"""decodes the tokens to the answer with a given tokenizer"""
answer_tokens = tokenizer.convert_ids_to_tokens(
token, skip_special_tokens=True)
return tokenizer.convert_tokens_to_string(answer_tokens)
def serverless_pipeline(model_path='./model'):
"""Initializes the model and tokenzier and returns a predict function that ca be used as pipeline"""
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForQuestionAnswering.from_pretrained(model_path)
def predict(question, context):
"""predicts the answer on an given question and context. Uses encode and decode method from above"""
input_ids, attention_mask = encode(tokenizer,question, context)
start_scores, end_scores = model(torch.tensor(
[input_ids]), attention_mask=torch.tensor([attention_mask]))
ans_tokens = input_ids[torch.argmax(
start_scores): torch.argmax(end_scores)+1]
answer = decode(tokenizer,ans_tokens)
return answer
return predict
# initializes the pipeline
question_answering_pipeline = serverless_pipeline()
def handler(event, context):
try:
# loads the incoming event into a dictionary
body = json.loads(event['body'])
# uses the pipeline to predict the answer
answer = question_answering_pipeline(question=body['question'], context=body['context'])
return {
"statusCode": 200,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({'answer': answer})
}
except Exception as e:
print(repr(e))
return {
"statusCode": 500,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({"error": repr(e)})
}
```
---
## Configure the `serverless.yaml`, add EFS, and set up an API Gateway for inference.
I provide the complete `serverless.yaml` for this example, but we only go through the details we need for our
EFS-filesystem and leave out all standard configurations. If you want to learn more about the `serverless.yaml`, I
suggest you check out
[Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero). In
this article, I went through each configuration and explained its usage.
```yaml
service: new-serverless-bert-lambda
plugins:
- serverless-pseudo-parameters
custom:
efsAccessPoint: ${ssm:/efs/accessPoint/id}
LocalMountPath: /mnt/efs
efs_pip_path: /lib
provider:
name: aws
runtime: python3.8
region: eu-central-1
memorySize: 3008 # optional, in MB, default is 1024
timeout: 60 # optional, in seconds, default is 6
environment: # Service wide environment variables
MNT_DIR: ${self:custom.LocalMountPath}
EFS_PIP_PATH: '${self:custom.LocalMountPath}${self:custom.efs_pip_path}'
iamManagedPolicies:
- arn:aws:iam::aws:policy/AmazonElasticFileSystemClientReadWriteAccess
package:
exclude:
- test/**
- lib/**
- terraform/**
- node_modules/**
- .vscode/**
- .serverless/**
- .pytest_cache/**
- __pychache__/**
functions:
questionanswering:
handler: handler.handler
fileSystemConfig:
localMountPath: ${self:custom.LocalMountPath}
arn: 'arn:aws:elasticfilesystem:${self:provider.region}:#{AWS::AccountId}:access-point/${self:custom.efsAccessPoint}'
vpc:
securityGroupIds:
- <your-default-security-group-id>
subnetIds:
- <your-default-subnet-id>
- <your-default-subnet-id>
- <your-default-subnet-id>
events:
- http:
path: qa
method: post
```
We need to install the `serverless-pseudo-parameters` plugin with the following command.
```bash
npm install serverless-pseudo-parameters
```
We use the `serverless-pseudo-parameters` plugin to get our `AWS::AccountID` referenced in the `serverless.yaml`. All
needed custom variables are defined under `custom` or in our `functions` section.
**custom:**
- `efsAccessPoint` should be the value of your EFS access point. Here we use our SSM parameter created earlier by our
`terraform` templates.
- `LocalMountPath` is the path under which EFS is mounted in the AWS Lambda function.
- `efs_pip_path` is the path under which we installed our Python dependencies using `efsync`.
**functions**
- `securityGroupIds` can be any security group in the AWS account. We use the `default` security group id. This one
should look like this `sg-1018g448`.
```bash
aws ec2 describe-security-groups --filters Name=description,Values="default VPC security group" --profile serverless-bert
```
- `subnetIds` should be the subnets in which the EFS filesystem is running. They should look like this: `subnet-8f9a7de5`.
```bash
aws efs describe-mount-targets --file-system-id <filesystem-id> --profile serverless-bert
```
---
## Deploy & Test the function
In order to deploy the function, we run `serverless deploy --aws-profile serverless-bert`.
```bash
serverless deploy --aws-profile serverless-bert
```
After this process is done we should see something like this.
![serverless-deploy](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/serverless-deploy.png)
To test our Lambda function we can use Insomnia, Postman, or any other REST client. Just add a JSON with a `context` and
a `question` to the body of your request. Let's try it with our example from the colab notebook.
```json
{
"context": "We introduce a new language representation model called BERT, which stands for idirectional Encoder Representations from Transformers. Unlike recent language epresentation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial taskspecific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"question": "What is BERTs best score on Squadv2 ?"
}
```
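If you prefer to test it from code instead of a REST client, a small `requests` snippet works as well — the URL below is a placeholder; use the endpoint printed by the `serverless deploy` output:

```python
import requests

# placeholder: replace with the endpoint URL from the `serverless deploy` output
url = "https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/qa"

payload = {
    "context": "We introduce a new language representation model called BERT, ...",
    "question": "What is BERTs best score on Squadv2 ?",
}

# the Lambda handler parses the JSON body and returns {"answer": "..."}
response = requests.post(url, json=payload)
print(response.json())
```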
Our `serverless_pipeline()` answered our question correctly with `83.1`. Also, you can see the complete first request
took around 29 seconds, 15 seconds of which were used to initialize the model in our function.
![insomnia-request](/static/blog/new-serverless-bert-with-huggingface-aws-lambda/insomnia.png)
The second request took only 390ms.
The best thing is, our BERT model automatically scales up if there are several incoming requests! It scales up to
thousands of parallel requests without any worries.
---
## Conclusion
We have successfully implemented a serverless Question-Answering API. For the implementation, we used both IaC tools and
"State of the Art" NLP models in a serverless fashion. We reduced the complexity from a developer's perspective but
included a lot of DevOps/MLOps steps. I think it is necessary to include DevOps/MLOps, which handles your deployment and
provisioning if you want to run scalable serverless machine learning in production.
---
You can find the complete code in this [GitHub repository](https://github.com/philschmid/new-serverless-bert-aws-lambda).
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
efsync my first open-source MLOps toolkit | https://www.philschmid.de/efsync-my-first-open-source-mlops-toolkit | 2020-11-04 | [
"Serverless",
"AWS",
"MLOps"
] | efsync is a CLI/SDK tool, which syncs files from S3 or local filesystem automatically to AWS EFS and enables you to install dependencies with the AWS Lambda runtime directly into your EFS filesystem. | Part of using Machine Learning successfully in production is the use of MLOps. MLOps enhances DevOps with continuous
training (CT). The main components of MLOps therefore include continuous integration (CI), continuous delivery (CD), and
continuous training (CT).
[Nvidia wrote an article about what MLOps is in detail.](https://blogs.nvidia.com/blog/2020/09/03/what-is-mlops/)
My Name is Philipp and I live in Nuremberg, Germany. Currently, I am working as a machine learning engineer at a
technology incubation startup. At work, I design and implement cloud-native machine learning architectures for fin-tech
and insurance companies. I am a big fan of Serverless and providing machine learning models in a serverless fashion. I
already wrote two articles about how to use Deep Learning models like BERT in a Serverless Environment like AWS Lambda.
- [Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero)
- [Serverless BERT with HuggingFace and AWS Lambda](https://www.philschmid.de/serverless-bert-with-huggingface-and-aws-lambda)
A big hurdle to overcome in serverless machine learning with tools like [AWS Lambda](https://aws.amazon.com/de/lambda/),
[Google Cloud Functions](https://cloud.google.com/functions),
[Azure Functions](https://azure.microsoft.com/de-de/services/functions/) was storage.
[Tensorflow](https://www.tensorflow.org/) and [Pytorch](https://pytorch.org/) have a huge size, and newer "State of
the Art" models like BERT have a size of over 300MB.
In July this year, AWS added support for
[Amazon Elastic File System (EFS)](https://aws.amazon.com/efs/?nc1=h_ls), a
scalable and elastic NFS file system, for AWS Lambda. This allows us to mount AWS EFS filesystems to
[AWS Lambda](https://aws.amazon.com/lambda/?nc1=h_ls) functions.
Until today it was very difficult to sync dependencies or model files to an AWS EFS Filesystem. You could do it with
[AWS Datasync](https://docs.aws.amazon.com/efs/latest/ug/gs-step-four-sync-files.html) or you could start an EC2
instance in the same subnet and VPC and upload your files from there.
For this reason, I have built an MLOps toolkit called **efsync**. Efsync is a CLI/SDK tool, which syncs files from S3 or
local filesystem automatically to AWS EFS and enables you to install dependencies with the AWS Lambda runtime directly
into your EFS filesystem. The CLI is easy to use; you only need access to an AWS account and an AWS EFS filesystem up
and running.
---
## Architecture
![efsync architecture](/static/blog/efsync-my-first-open-source-mlops-toolkit/efsync.png)
---
## Quick Start
1. **Install via pip3**
```python
pip3 install efsync
```
2. **sync your pip dependencies or files to AWS EFS**
```python
# using the CLI
efsync -cf efsync.yaml
# using the SDK
from efsync import efsync
efsync('efsync.yaml')
```
---
## Use Cases
Efsync covers 5 use cases. On the one hand, it allows you to install the needed dependencies; on the other hand, efsync
helps you to get your models ready, be it via sync from S3 to EFS or a local upload with SCP. I created an example
[Jupyter Notebook](https://github.com/philschmid/efsync#--examples) for each use case.
The 5 use cases consist of:
- install Python dependencies with the AWS Lambda runtime directly into an EFS filesystem and use them in an AWS Lambda
function. _[Example](https://github.com/philschmid/efsync/blob/main/examples/efsync_pip_packages.ipynb)_
- sync files from S3 to an EFS Filesystem.
_[Example](https://github.com/philschmid/efsync/blob/main/examples/efsync_s3_files.ipynb)_
- upload files with SCP to an EFS Filesystem.
_[Example](https://github.com/philschmid/efsync/blob/main/examples/efsync_scp_files.ipynb)_
- Install Python dependencies and sync from S3 to an EFS Filesystem.
_[Example](https://github.com/philschmid/efsync/blob/main/examples/efsync_pip_packages_and_s3_files.ipynb)_
- Install Python dependencies and uploading files with SCP an EFS Filesystem.
_[Example](https://github.com/philschmid/efsync/blob/main/examples/efsync_pip_packages_and_scp_files.ipynb)_
_**Note:** Each Example can be run in a Google Colab._
---
## Implementation Configuration possibilities
There are 4 different ways to use efsync in your project:
- You can create a `yaml` configuration and use the SDK.
- You can create a python `dict` and use the SDK.
- You can create a `yaml` configuration and use the CLI.
- You can use the CLI with parameters.
You can find examples for each configuration in the
[Github Repository](https://github.com/philschmid/efsync#%EF%B8%8F--configurations). I also included configuration
examples for the different use cases.
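For illustration, a dict-based SDK call could look like the sketch below. This assumes the `efsync()` entry point accepts the same keys as the yaml configuration passed as a Python dict — the values are placeholders, and you should double-check the exact call signature against the repository examples:

```python
from efsync import efsync

# placeholder values -- assumption: the SDK accepts the same keys as the yaml configuration
config = {
    "efs_filesystem_id": "fs-12345678",   # aws efs filesystem id (mount point)
    "subnet_Id": "subnet-12345678",       # subnet the efs is running in
    "ec2_key_name": "efsync-key",         # key name for the temporary ec2 instance
    "aws_profile": "default",             # iam profile configured in .aws/credentials
    "aws_region": "eu-central-1",         # aws region where the efs is running
    "efs_pip_dir": "lib",                 # pip directory on ec2
    "python_version": 3.8,                # python version used for installing pip packages
    "requirements": "requirements.txt",   # path to the requirements.txt
}

efsync(config)
```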
_**Note**: If you sync a file with SCP from a local directory (e.g. `model/bert`) to EFS (`my_efs_model`), efsync will sync the model to `my_efs_model/bert`. That happens because scp uploads the files recursively._
---
## Examples
The following example shows how to install Python dependencies to the EFS filesystem and afterwards sync files from S3 to the EFS filesystem. For configuration purposes, we have to create an `efsync.yaml` and a `requirements.txt` file, which hold our configuration and dependencies.
**1. Install efsync**
```python
pip3 install efsync
```
**2. Create a `requirements.txt` with the dependencies**
```python
torch
numpy
```
**3. Create an `efsync.yaml` with all required configuration**
```yaml
#standard configuration
efs_filesystem_id: fs-2226b27a # aws efs filesystem id (moint point)
subnet_Id: subnet-17f97a7d # subnet of which the efs is running in
ec2_key_name: efsync-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # Defines if the EFS should be cleaned up before uploading. values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: schueler-vz # aws iam profile with required permission configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# pip packages configurations
efs_pip_dir: lib # pip directory on ec2
python_version: 3.8 # python version used for installing pip packages -> should be used as lambda runtime afterwards
requirements: requirements.txt # path + file to requirements.txt which holds the installable pip packages
# s3 config
s3_bucket: efsync-test-bucket # s3 bucket name from which files should be downloaded
s3_keyprefix: distilbert # s3 keyprefix for the files
file_dir_on_ec2: ml # Name of the directory where your S3 files will be saved
```
The `efsync.yaml` contains all configuration, such as:
**Standard Configuration**
- `efs_filesystem_id`: the AWS EFS filesystem id (mount point).
- `subnet_Id`: the ID of the subnet in which the EFS filesystem is running.
- `ec2_key_name`: A required key name for starting the EC2 instance.
- `aws_profile`: the IAM profile with required permission configured in `.aws/credentials`.
- `aws_region`: the AWS region where the EFS filesystem is running.
**Pip Dependencies Configurations**
- `efs_pip_dir`: the pip directory on EC2, where dependencies will be installed.
- `python_version`: the Python version used for installing pip packages -> should be used as the Lambda runtime afterwards.
- `requirements`: Path + file to requirements.txt which holds the installable pip dependencies.
**S3 Configurations**
- `s3_bucket`: the S3 bucket name from which files should be downloaded.
- `s3_keyprefix`: the S3 key prefix for the directory/files.
- `file_dir_on_ec2`: the name of the directory where your S3 files will be saved.
**4. Run efsync with `efsync.yaml`**
```python
from efsync import efsync
efsync('./efsync.yaml')
#--------------------------Result--------------------------#
#2020-10-25 20:12:33,747 - efsync - starting....
#2020-10-25 20:12:33,748 - efsync - loading config
#2020-10-25 20:12:33,772 - efsync - creating security group
#2020-10-25 20:12:34,379 - efsync - loading default security group
#2020-10-25 20:12:39,444 - efsync - creating ssh key for scp in memory
#2020-10-25 20:12:40,005 - efsync - starting ec2 instance with security group sg-0ff6539317d7e48da and subnet_Id subnet-17f97a7d
#2020-10-25 20:18:46,430 - efsync - stopping ec2 instance with instance id i-020e3f3cc4b3d690b
#2020-10-25 20:19:17,159 - efsync - deleting iam profile
#2020-10-25 20:19:18,354 - efsync - deleting ssh key
#2020-10-25 20:19:18,604 - efsync - deleting security group
#2020-10-25 20:19:18,914 - efsync - #################### finished after 6.752833333333333 minutes ####################
```
---
## Summary
Efsync lets you easily sync files from S3 or a local filesystem automatically to AWS EFS and enables you to install dependencies with the AWS Lambda runtime directly into your EFS filesystem. Installing dependencies and syncing files from S3 takes around 6 minutes, installing dependencies alone around 4–5 minutes, and syncing files alone around 2 minutes.
---
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
You can find the library [on Github](https://github.com/philschmid/efsync). Feel free to create Pull Request or Issues
if you have any questions or improvements. |
Creating document embeddings with Hugging Face's Transformers & Amazon SageMaker | https://www.philschmid.de/custom-inference-huggingface-sagemaker | 2022-03-08 | [
"HuggingFace",
"AWS",
"BERT",
"SageMaker"
] | Learn how to use a custom Inference script for creating document embeddings with Hugging Face's Transformers, Amazon SageMaker, and Sentence Transformers. | Welcome to this getting started guide. We will use the Hugging Face Inference DLCs and Amazon SageMaker Python SDK to create a [real-time inference endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) running a Sentence Transformers for document embeddings. Currently, the [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) supports the [pipeline feature](https://huggingface.co/transformers/main_classes/pipelines.html) from Transformers for zero-code deployment. This means you can run compatible Hugging Face Transformer models without providing pre- & post-processing code. Therefore we only need to provide an environment variable `HF_TASK` and `HF_MODEL_ID` when creating our endpoint and the Inference Toolkit will take care of it. This is a great feature if you are working with existing [pipelines](https://huggingface.co/transformers/main_classes/pipelines.html).
If you want to run other tasks, such as creating document embeddings, you can provide the pre- and post-processing code yourself via an `inference.py` script. The Hugging Face Inference Toolkit allows the user to override the default methods of the `HuggingFaceHandlerService`.
The custom module can override the following methods (a minimal skeleton is shown after the list):
- `model_fn(model_dir)` overrides the default method for loading a model. The return value `model` will be used in the `predict_fn` for predictions.
  - `model_dir` is the path to your unzipped `model.tar.gz`.
- `input_fn(input_data, content_type)` overrides the default method for pre-processing. The return value `data` will be used in `predict_fn` for predictions. The inputs are:
  - `input_data` is the raw body of your request.
  - `content_type` is the content type from the request header.
- `predict_fn(processed_data, model)` overrides the default method for predictions. The return value `predictions` will be used in `output_fn`.
  - `model` is the returned value from the `model_fn` method.
  - `processed_data` is the returned value from the `input_fn` method.
- `output_fn(prediction, accept)` overrides the default method for post-processing. The return value `result` will be the response to your request (e.g. `JSON`). The inputs are:
  - `predictions` is the result from `predict_fn`.
  - `accept` is the return accept type from the HTTP Request, e.g. `application/json`.
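A minimal skeleton overriding all four hooks could look like the sketch below; the bodies are placeholders, and the actual `inference.py` used in this example follows later in this post.
```python
# sketch of an inference.py overriding all four hooks (placeholder bodies)
from transformers import AutoModel, AutoTokenizer

def model_fn(model_dir):
    # load artifacts from the unzipped model.tar.gz
    return AutoModel.from_pretrained(model_dir), AutoTokenizer.from_pretrained(model_dir)

def input_fn(input_data, content_type):
    # deserialize the raw request body, e.g. parse JSON
    ...

def predict_fn(processed_data, model):
    # run inference with the object returned by model_fn
    ...

def output_fn(prediction, accept):
    # serialize the prediction into the requested response format
    ...
```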
In this example, we are going to use Sentence Transformers to create sentence embeddings using a mean pooling layer on the raw representation.
_NOTE: You can run this demo in Sagemaker Studio, your local machine, or Sagemaker Notebook Instances_
## Development Environment and Permissions
### Installation
```python
%pip install sagemaker --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.75.0"
```
Install `git` and `git-lfs`
```python
# For notebook instances (Amazon Linux)
!sudo yum update -y
!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash
!sudo yum install git-lfs git -y
# For other environments (Ubuntu)
!sudo apt-get update -y
!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
!sudo apt-get install git-lfs git -y
```
### Permissions
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## Create a custom `inference.py` script
To use the custom inference script, you need to create an `inference.py` script. In our example, we are going to overwrite the `model_fn` to load our sentence transformer correctly and the `predict_fn` to apply mean pooling.
We are going to use the [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model. It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
```python
!mkdir code
```
```python
%%writefile code/inference.py
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
# Helper: Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
def model_fn(model_dir):
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModel.from_pretrained(model_dir)
return model, tokenizer
def predict_fn(data, model_and_tokenizer):
# destruct model and tokenizer
model, tokenizer = model_and_tokenizer
# Tokenize sentences
sentences = data.pop("inputs", data)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
# return dictonary, which will be json serializable
return {"vectors": sentence_embeddings[0].tolist()}
```
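Before bundling the script, we can sanity-check the two handlers locally; in the snippet below the Hub model id stands in for the unzipped `model.tar.gz` directory that SageMaker would pass as `model_dir`. This quick local check is optional and not required for deployment.
```python
import sys

# make the freshly written script importable
sys.path.append("code")
from inference import model_fn, predict_fn

# the Hub model id stands in for the unzipped model.tar.gz directory
model_and_tokenizer = model_fn("sentence-transformers/all-MiniLM-L6-v2")
result = predict_fn({"inputs": "This is a test sentence."}, model_and_tokenizer)
print(len(result["vectors"]))  # 384
```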
## Create `model.tar.gz` with inference script and model
To use our `inference.py` we need to bundle it into a `model.tar.gz` archive with all our model artifacts, e.g. `pytorch_model.bin`. The `inference.py` script will be placed into a `code/` folder. We will use `git` and `git-lfs` to easily download our model from hf.co/models and upload it to Amazon S3 so we can use it when creating our SageMaker endpoint.
```python
repository = "sentence-transformers/all-MiniLM-L6-v2"
model_id=repository.split("/")[-1]
s3_location=f"s3://{sess.default_bucket()}/custom_inference/{model_id}/model.tar.gz"
```
1. Download the model from hf.co/models with `git clone`.
```python
!git lfs install
!git clone https://huggingface.co/$repository
```
2. Copy `inference.py` into the `code/` directory of the model directory.
```python
!cp -r code/ $model_id/code/
```
3. Create a `model.tar.gz` archive with all the model artifacts and the `inference.py` script.
```python
%cd $model_id
!tar zcvf model.tar.gz *
```
4. Upload the `model.tar.gz` to Amazon S3:
```python
!aws s3 cp model.tar.gz $s3_location
# upload: ./model.tar.gz to s3://sagemaker-us-east-1-558105141721/custom_inference/all-MiniLM-L6-v2/model.tar.gz
```
## Create a custom `HuggingFaceModel`
After we have created and uploaded our `model.tar.gz` archive to Amazon S3, we can create a custom `HuggingFaceModel` class. This class will be used to create and deploy our SageMaker endpoint.
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_location, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py38', # python version used
)
# deploy the endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge"
)
```
## Request Inference Endpoint using the `HuggingFacePredictor`
The `.deploy()` method returns a `HuggingFacePredictor` object which can be used to request inference.
```python
data = {
"inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
res = predictor.predict(data=data)
print(res)
# {'vectors': [0.005078191868960857, -0.0036594511475414038, .....]}
```
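Since `predict_fn` returns normalized embeddings, we can, for example, compare two documents with a simple dot product, which equals the cosine similarity for normalized vectors. The example sentences below are arbitrary.
```python
import numpy as np

# create embeddings for two (arbitrary) documents
doc_a = predictor.predict(data={"inputs": "SageMaker makes it easy to deploy models."})
doc_b = predictor.predict(data={"inputs": "Deploying models is simple with SageMaker."})

a, b = np.array(doc_a["vectors"]), np.array(doc_b["vectors"])
# the embeddings are L2-normalized in predict_fn, so the dot product is the cosine similarity
similarity = float(np.dot(a, b))
print(f"cosine similarity: {similarity:.4f}")
```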
### Delete model and endpoint
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## Conclusion
We managed to provide a custom inference script (`inference.py`) to overwrite the default methods for model loading and running inference. This allowed us to use Sentence Transformers models for creating sentence embeddings with minimal code changes.
Custom Inference scripts are an easy and nice way to customize the inference pipeline of the Hugging Face Inference Toolkit when your pipeline is not represented in the [pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) API of Transformers or when you want to add custom logic.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker | https://www.philschmid.de/deploy-gptj-sagemaker | 2022-01-11 | [
"HuggingFace",
"AWS",
"SageMaker",
"GPTJ"
] | Learn how to deploy EleutherAIs GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker. | Almost 6 months ago to the day, [EleutherAI](https://www.eleuther.ai/) released [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B), an open-source alternative to [OpenAIs GPT-3](https://openai.com/blog/gpt-3-apps/). [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B) is the 6 billion parameter successor to [EleutherAIs](https://www.eleuther.ai/) GPT-NEO family, a family of transformer-based language models based on the GPT architecture for text generation.
[EleutherAI](https://www.eleuther.ai/)'s primary goal is to train a model that is equivalent in size to GPT-3 and make it available to the public under an open license.
Over the last 6 months, `GPT-J` gained a lot of interest from Researchers, Data Scientists, and even Software Developers, but it remained very challenging to deploy `GPT-J` into production for real-world use cases and products.
There are some hosted solutions to use `GPT-J` for production workloads, like the [Hugging Face Inference API](https://huggingface.co/inference-api), or for experimenting using [EleutherAIs 6b playground](https://6b.eleuther.ai/), but fewer examples on how to easily deploy it into your own environment.
In this blog post, you will learn how to easily deploy `GPT-J` using [Amazon SageMaker](https://aws.amazon.com/de/sagemaker/) and the [Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) with a few lines of code for scalable, reliable, and secure real-time inference using a regular size GPU instance with NVIDIA T4 (~500$/m).
But before we get into it, I want to explain why deploying `GPT-J` into production is challenging.
---
## Background
The weights of the 6 billion parameter model represent a ~24GB memory footprint. To load it in float32, one would need at least 2x model size CPU RAM: 1x for initial weights and another 1x to load the checkpoint. So for `GPT-J` it would require at least 48GB of CPU RAM to just load the model.
To make the model more accessible, [EleutherAI](https://www.eleuther.ai/) also provides float16 weights, and `transformers` has new options to reduce the memory footprint when loading large language models. Combining all this it should take roughly 12.1GB of CPU RAM to load the model.
```python
from transformers import GPTJForCausalLM
import torch
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B",
revision="float16",
torch_dtype=torch.float16,
low_cpu_mem_usage=True
)
```
The caveat of this example is that it takes a very long time until the model is loaded into memory and ready for use. In my experiments, it took `3 minutes and 32 seconds` to load the model with the code snippet above on a `P3.2xlarge` AWS EC2 instance (the model was not stored on disk). This duration can be reduced by storing the model already on disk, which reduces the load time to `1 minute and 23 seconds`, which is still very long for production workloads where you need to consider scaling and reliability.
For example, Amazon SageMaker has a [60s limit for requests to respond](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html#sagemaker_region), meaning the model needs to be loaded and the predictions to run within 60s, which in my opinion makes a lot of sense to keep the model/endpoint scalable and reliable for your workload. If you have longer predictions, you could use [batch-transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html).
In [Transformers](https://github.com/huggingface/transformers) the models loaded with the `from_pretrained` method are following PyTorch's [recommended practice](https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended), which takes around `1.97 seconds` for BERT [[REF]](https://colab.research.google.com/drive/1-Y5f8PWS8ksoaf1A2qI94jq0GxF2pqQ6?usp=sharing). PyTorch offers an [additional alternative way of saving and loading models](https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-entire-model) using `torch.save(model, PATH)` and `torch.load(PATH)`.
_“Saving a model in this way will save the entire module using Python’s [pickle](https://docs.python.org/3/library/pickle.html) module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved.”_
This means that when we save a model with `transformers==4.13.2` it could be potentially incompatible when trying to load with `transformers==4.15.0`. However, loading models this way reduces the loading time by **~12x,** down to `0.166s` for BERT.
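A rough way to reproduce such a comparison for a smaller model like BERT could look like the snippet below; it is illustrative only, and timings vary with hardware and disk caching.
```python
import time
import torch
from transformers import AutoModel

# time the standard from_pretrained loading path
start = time.perf_counter()
model = AutoModel.from_pretrained("bert-base-uncased")
print(f"from_pretrained: {time.perf_counter() - start:.2f}s")

# save the whole module with pickle-based serialization
torch.save(model, "bert.pt")

# time the torch.load path
start = time.perf_counter()
model = torch.load("bert.pt")
print(f"torch.load: {time.perf_counter() - start:.2f}s")
```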
Applying this to `GPT-J` means that we can reduce the loading time from `1 minute and 23 seconds` down to `7.7 seconds`, which is ~10.5x faster.
![model load time](/static/blog/deploy-gptj-sagemaker/model_load_time.png)
## Tutorial
With this method of saving and loading models, we achieve model loading performance for `GPT-J` that is compatible with production scenarios. But we need to keep the following in mind:
> Align PyTorch and Transformers version when saving the model with `torch.save(model,PATH)` and loading the model with `torch.load(PATH)` to avoid incompatibility.
### Save `GPT-J` using `torch.save`
To create our `torch.load()` compatible model file we load `GPT-J` using Transformers and the `from_pretrained` method, and then save it with `torch.save()`.
```python
from transformers import AutoTokenizer,GPTJForCausalLM
import torch
# load fp 16 model
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16)
# save model with torch.save
torch.save(model, "gptj.pt")
```
Now we are able to load our `GPT-J` model with `torch.load()` to run predictions.
```python
from transformers import pipeline
import torch
# load model
model = torch.load("gptj.pt")
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
# create pipeline
gen = pipeline("text-generation",model=model,tokenizer=tokenizer,device=0)
# run prediction
gen("My Name is philipp")
#[{'generated_text': 'My Name is philipp k. and I live just outside of Detroit....
```
---
### Create `model.tar.gz` for the Amazon SageMaker real-time endpoint
Since we can load our model quickly and run inference on it let’s deploy it to Amazon SageMaker.
There are two ways you can deploy transformers to Amazon SageMaker. You can either [“Deploy a model from the Hugging Face Hub”](https://huggingface.co/docs/sagemaker/inference#deploy-a-model-from-the-%F0%9F%A4%97-hub) directly or [“Deploy a model with `model_data` stored on S3”](https://huggingface.co/docs/sagemaker/inference#deploy-with-model_data). Since we are not using the default Transformers method we need to go with the second option and deploy our endpoint with the model stored on S3.
For this, we need to create a `model.tar.gz` artifact containing our model weights and additional files we need for inference, e.g. `tokenizer.json`.
**We provide uploaded and publicly accessible `model.tar.gz` artifacts, which can be used with the `HuggingFaceModel` to deploy `GPT-J` to Amazon SageMaker.**
If you still want or need to create your own `model.tar.gz`, e.g. because of compliance guidelines, you can use the helper script [convert_gptj.py](https://github.com/philschmid/amazon-sagemaker-gpt-j-sample/blob/main/convert_gptj.py) for this purpose, which creates the `model.tar.gz` and uploads it to S3.
```bash
# clone directory
git clone https://github.com/philschmid/amazon-sagemaker-gpt-j-sample.git
# change directory to amazon-sagemaker-gpt-j-sample
cd amazon-sagemaker-gpt-j-sample
# create and upload model.tar.gz
pip3 install -r requirements.txt
python3 convert_gptj.py --bucket_name {model_storage}
```
The `convert_gptj.py` script should print out an S3 URI similar to this: `s3://hf-sagemaker-inference/gpt-j/model.tar.gz`.
### Deploy `GPT-J` as Amazon SageMaker Endpoint
To deploy our Amazon SageMaker Endpoint we are going to use the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/) and the `HuggingFaceModel` class.
The snippet below uses the `get_execution_role` which is only available inside Amazon SageMaker Notebook Instances or Studio. If you want to deploy a model outside of it check [the documentation](https://huggingface.co/docs/sagemaker/train#installation-and-setup#).
The `model_uri` defines the location of our `GPT-J` model artifact. We are going to use the publicly available one provided by us.
```python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
# IAM role with permissions to create endpoint
role = sagemaker.get_execution_role()
# public S3 URI to gpt-j artifact
model_uri="s3://huggingface-sagemaker-models/transformers/4.12.3/pytorch/1.9.1/gpt-j/model.tar.gz"
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=model_uri,
transformers_version='4.12.3',
pytorch_version='1.9.1',
py_version='py38',
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.g4dn.xlarge' #'ml.p3.2xlarge' # ec2 instance type
)
```
If you want to use your own `model.tar.gz` just replace the `model_uri` with your S3 Uri.
The deployment should take around 3-5 minutes.
### Run predictions
We can run predictions using the `predictor` instance created by our `.deploy` method. To send a request to our endpoint, we use `predictor.predict` with our `inputs`.
```python
predictor.predict({
"inputs": "Can you please let us know more details about your "
})
```
If you want to customize your predictions using additional `kwargs` like `min_length`, check out “Usage best practices” below.
## Usage best practices
When using generative models, most of the time you want to configure or customize your prediction to fit your needs, for example by using beam search, configuring the max or min length of the generated sequence, or adjust the temperature to reduce repetition. The Transformers library provides different strategies and `kwargs` to do this, the Hugging Face Inference toolkit offers the same functionality using the `parameters` attribute of your request payload. Below you can find examples on how to generate text without parameters, with beam search, and using custom configurations. If you want to learn about different decoding strategies check out this [blog post](https://huggingface.co/blog/how-to-generate).
### Default request
This is an example of a default request using `greedy` search.
Inference-time after the first request: `3s`
```python
predictor.predict({
"inputs": "Can you please let us know more details about your "
})
```
### Beam search request
This is an example of a request using `beam` search with 5 beams.
Inference-time after the first request: `3.3s`
```python
predictor.predict({
"inputs": "Can you please let us know more details about your ",
"parameters" : {
"num_beams": 5,
}
})
```
### Parameterized request
This is an example of a request using a custom parameter, e.g. `min_length` for generating at least 512 tokens.
Inference-time after the first request: `38s`
```python
predictor.predict({
"inputs": "Can you please let us know more details about your ",
"parameters" : {
"max_length": 512,
"temperature": 0.9,
}
})
```
### Few-Shot example (advanced)
This is an example of how you could use `eos_token_id` to stop the generation on a certain token, e.g. `\n`, `.` or `###` for few-shot predictions. Below is a few-shot example for generating tweets for keywords.
Inference-time after the first request: `15-45s`
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
end_sequence="###"
temperature=4
max_generated_token_length=25
prompt= """key: markets
tweet: Take feedback from nature and markets, not from people.
###
key: children
tweet: Maybe we die so we can come back as children.
###
key: startups
tweet: Startups shouldn’t worry about how to put out fires, they should worry about how to start them.
###
key: hugging face
tweet:"""
predictor.predict({
'inputs': prompt,
"parameters" : {
"max_length": int(len(prompt) + max_generated_token_length),
"temperature": float(temperature),
"eos_token_id": int(tokenizer.convert_tokens_to_ids(end_sequence)),
"return_full_text":False
}
})
```
---
To delete your endpoint you can run.
```python
predictor.delete_endpoint()
predictor.delete_model()
```
## Conclusion
We successfully managed to deploy `GPT-J`, a 6 billion parameter language model created by [EleutherAI](https://www.eleuther.ai/), using Amazon SageMaker. We reduced the model load time from 3.5 minutes down to 8 seconds to be able to run scalable, reliable inference.
Remember that using `torch.save()` and `torch.load()` can create incompatibility issues. If you want to learn more about scaling out your Amazon SageMaker Endpoints check out my other blog post: [“MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines”](https://www.philschmid.de/mlops-sagemaker-huggingface-transformers).
---
Thanks for reading! If you have any question, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Accelerate BERT inference with DeepSpeed-Inference on GPUs | https://www.philschmid.de/bert-deepspeed-inference | 2022-08-16 | [
"BERT",
"DeepSpeed",
"HuggingFace",
"Optimization"
] | Learn how to optimize BERT for GPU inference with a 1-line of code using Hugging Face Transformers and DeepSpeed. | In this session, you will learn how to optimize Hugging Face Transformers models for GPU inference using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). The session will show you how to apply state-of-the-art optimization techniques using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/).
This session will focus on single GPU inference on BERT and RoBERTa models.
By the end of this session, you will know how to optimize your Hugging Face Transformers models (BERT, RoBERTa) using DeepSpeed-Inference. We are going to optimize a BERT large model for token classification, which was fine-tuned on the conll2003 dataset to decrease the latency from 30ms to 10ms for a sequence length of 128.
You will learn how to:
- [Quick Intro: What is DeepSpeed-Inference](#quick-intro-what-is-deepspeed-inference)
- [1. Setup Development Environment](#1-setup-development-environment)
- [2. Load vanilla BERT model and set baseline](#2-load-vanilla-bert-model-and-set-baseline)
- [3. Optimize BERT for GPU using DeepSpeed `InferenceEngine`](#3-optimize-bert-for-gpu-using-deepspeed-inferenceengine)
- [4. Evaluate the performance and speed](#4-evaluate-the-performance-and-speed)
- [Conclusion](#conclusion)
Let's get started! 🚀
_This tutorial was created and run on a g4dn.xlarge AWS EC2 Instance including an NVIDIA T4._
---
## Quick Intro: What is DeepSpeed-Inference
[DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) is an extension of the [DeepSpeed](https://www.deepspeed.ai/) framework focused on inference workloads. [DeepSpeed Inference](https://www.deepspeed.ai/#deepspeed-inference) combines model parallelism technology such as tensor, pipeline-parallelism, with custom optimized cuda kernels.
DeepSpeed provides a seamless inference mode for compatible transformer based models trained using DeepSpeed, Megatron, and HuggingFace. For a list of compatible models please see [here](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/module_inject/replace_policy.py).
As mentioned DeepSpeed-Inference integrates model-parallelism techniques allowing you to run multi-GPU inference for LLM, like [BLOOM](https://huggingface.co/bigscience/bloom) with 176 billion parameters.
If you want to learn more about DeepSpeed inference:
- [Paper: DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale](https://arxiv.org/pdf/2207.00032.pdf)
- [Blog: Accelerating large-scale model inference and training via system optimizations and compression](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/)
## 1. Setup Development Environment
Our first step is to install DeepSpeed, along with PyTorch, Transformers, and some other libraries. Running the following cell will install all the required packages.
_Note: You need a machine with a GPU and a compatible CUDA installed. You can check this by running `nvidia-smi` in your terminal. If your setup is correct, you should get statistics about your GPU._
```python
!pip install torch==1.11.0 torchvision==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113 --upgrade -q
!pip install deepspeed==0.7.0 --upgrade -q
!pip install transformers[sentencepiece]==4.21.1 --upgrade -q
!pip install datasets evaluate[evaluator]==0.2.2 seqeval --upgrade -q
```
Before we start. Let's make sure all packages are installed correctly.
```python
import re
import torch
# check deepspeed installation
report = !python3 -m deepspeed.env_report
r = re.compile('.*ninja.*OKAY.*')
assert any(r.match(line) for line in report) == True, "DeepSpeed Inference not correct installed"
# check cuda and torch version
torch_version, cuda_version = torch.__version__.split("+")
torch_version = ".".join(torch_version.split(".")[:2])
cuda_version = f"{cuda_version[2:4]}.{cuda_version[4:]}"
r = re.compile(f'.*torch.*{torch_version}.*')
assert any(r.match(line) for line in report) == True, "Wrong Torch version"
r = re.compile(f'.*cuda.*{cuda_version}.*')
assert any(r.match(line) for line in report) == True, "Wrong Cuda version"
```
## 2. Load vanilla BERT model and set baseline
After we set up our environment, we create a baseline for our model. We use [dslim/bert-large-NER](https://huggingface.co/dslim/bert-large-NER), a BERT-large model fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset, achieving an f1 score of `95.7%`.
To create our baseline, we load the model with `transformers` and create a `token-classification` pipeline.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
# Model Repository on huggingface.co
model_id="dslim/bert-large-NER"
# Load Model and Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
# Create a pipeline for token classification
token_clf = pipeline("token-classification", model=model, tokenizer=tokenizer,device=0)
# Test pipeline
example = "My name is Wolfgang and I live in Berlin"
ner_results = token_clf(example)
print(ner_results)
# [{'entity': 'B-PER', 'score': 0.9971501, 'index': 4, 'word': 'Wolfgang', 'start': 11, 'end': 19}, {'entity': 'B-LOC', 'score': 0.9986046, 'index': 9, 'word': 'Berlin', 'start': 34, 'end': 40}]
```
Create a Baseline with `evaluate` using the `evaluator` and the `conll2003` dataset. The Evaluator class allows us to evaluate a model/pipeline on a dataset using a defined metric.
```python
from evaluate import evaluator
from datasets import load_dataset
# load eval dataset
eval_dataset = load_dataset("conll2003", split="validation")
# define evaluator
task_evaluator = evaluator("token-classification")
# run baseline
results = task_evaluator.compute(
model_or_pipeline=token_clf,
data=eval_dataset,
metric="seqeval",
)
print(f"Overall f1 score for our model is {results['overall_f1']*100:.2f}%")
print(f"The avg. Latency of the model is {results['latency_in_seconds']*1000:.2f}ms")
# Overall f1 score for our model is 95.76
# The avg. Latency of the model is 18.70ms
```
Our model achieves an f1 score of `95.8%` on the CoNLL-2003 dataset with an average latency across the dataset of `18.7ms`.
## 3. Optimize BERT for GPU using DeepSpeed `InferenceEngine`
The next and most important step is to optimize our model for GPU inference. This will be done using the DeepSpeed `InferenceEngine`. The `InferenceEngine` is initialized using the `init_inference` method. The `init_inference` method expects as parameters at least:
- `model`: The model to optimize.
- `mp_size`: The number of GPUs to use.
- `dtype`: The data type to use.
- `replace_with_kernel_inject`: Whether to inject custom kernels.
You can find more information about the `init_inference` method in the [DeepSpeed documentation](https://deepspeed.readthedocs.io/en/latest/inference-init.html) or [their inference blog](https://www.deepspeed.ai/tutorials/inference-tutorial/).
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification,pipeline
from transformers import pipeline
from deepspeed.module_inject import HFBertLayerPolicy
import deepspeed
# Model Repository on huggingface.co
model_id="dslim/bert-large-NER"
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
# init deepspeed inference engine
ds_model = deepspeed.init_inference(
model=model, # Transformers models
mp_size=1, # Number of GPU
dtype=torch.half, # dtype of the weights (fp16)
# injection_policy={"BertLayer" : HFBertLayerPolicy}, # replace BertLayer with DS HFBertLayerPolicy
replace_method="auto", # Lets DS autmatically identify the layer to replace
replace_with_kernel_inject=True, # replace the model with the kernel injector
)
# create accelerated pipeline
ds_clf = pipeline("token-classification", model=ds_model, tokenizer=tokenizer,device=0)
# Test pipeline
example = "My name is Wolfgang and I live in Berlin"
ner_results = ds_clf(example)
print(ner_results)
```
We can now inspect our model graph to see that the vanilla `BertLayer` modules have been replaced with `DeepSpeedTransformerInference` modules, a custom `nn.Module` that is optimized for inference by the DeepSpeed team.
```python
InferenceEngine(
(module): BertForTokenClassification(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(28996, 1024, padding_idx=0)
(position_embeddings): Embedding(512, 1024)
(token_type_embeddings): Embedding(2, 1024)
(LayerNorm): LayerNorm((1024,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): DeepSpeedTransformerInference(
(attention): DeepSpeedSelfAttention()
(mlp): DeepSpeedMLP()
)
```
We can validate this with a simple `assert`.
```python
from deepspeed.ops.transformer.inference import DeepSpeedTransformerInference
assert isinstance(ds_model.module.bert.encoder.layer[0], DeepSpeedTransformerInference) == True, "Model not sucessfully initalized"
```
Now, let's run the same evaluation as for our baseline transformers model.
```python
# evaluate the optimized model
ds_results = task_evaluator.compute(
model_or_pipeline=ds_clf,
data=eval_dataset,
metric="seqeval",
)
print(f"Overall f1 score for our model is {ds_results['overall_f1']*100:.2f}%")
print(f"The avg. Latency of the model is {ds_results['latency_in_seconds']*1000:.2f}ms")
# Overall f1 score for our model is 95.64
# The avg. Latency of the model is 9.33ms
```
Our DeepSpeed model achieves an f1 score of `95.6%` on the CoNLL-2003 dataset with an average latency across the dataset of `9.33ms`.
## 4. Evaluate the performance and speed
As the last step, we want to take a detailed look at the performance and accuracy of our optimized model. Applying optimization techniques like graph optimizations or mixed precision not only impacts performance (latency); it can also have an impact on the accuracy of the model. So accelerating your model comes with a trade-off.
In our example, we achieved an f1 score of `95.8%` on the `conll2003` evaluation dataset with an average latency of `18.7ms` for the vanilla model, and an f1 score of `95.6%` with an average latency of `9.33ms` for our optimized model.
```python
print(f"The optimized ds-model achieves {round(ds_results['overall_f1']/results['overall_f1'],4)*100:.2f}% accuracy of the vanilla transformers model.")
```
The optimized ds-model achieves `99.88%` accuracy of the vanilla transformers model.
Now let's test the performance (latency) of our optimized model. We will use a payload with a sequence length of 128 for the benchmark. To keep it simple, we will use a Python loop and calculate the average and p95 latency for our vanilla model and the optimized model.
```python
from time import perf_counter
import numpy as np
payload="Hello my name is Philipp. I am getting in touch with you because i didn't get a response from you. What do I need to do to get my new card which I have requested 2 weeks ago? Please help me and answer this email in the next 7 days. Best regards and have a nice weekend "*2
print(f'Payload sequence length is: {len(tokenizer(payload)["input_ids"])}')
def measure_latency(pipe):
latencies = []
# warm up
for _ in range(10):
_ = pipe(payload)
# Timed run
for _ in range(300):
start_time = perf_counter()
_ = pipe(payload)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
time_p95_ms = 1000 * np.percentile(latencies,95)
return f"P95 latency (ms) - {time_p95_ms}; Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f};", time_p95_ms
vanilla_model=measure_latency(token_clf)
ds_opt_model=measure_latency(ds_clf)
print(f"Vanilla model: {vanilla_model[0]}")
print(f"Optimized model: {ds_opt_model[0]}")
print(f"Improvement through optimization: {round(vanilla_model[1]/ds_opt_model[1],2)}x")
# Payload sequence length is: 128
# Vanilla model: P95 latency (ms) - 30.401047450277474; Average latency (ms) - 29.68 +\- 0.54;
# Optimized model: P95 latency (ms) - 10.401162500056671; Average latency (ms) - 10.10 +\- 0.17;
# Improvement through optimization: 2.92x
```
We managed to accelerate the `BERT-Large` model latency from `30.4ms` to `10.4ms` or 2.92x for a sequence length of 128.
![bert-latency](/static/blog/bert-deepspeed-inference/ds-bert-latency.png)
## Conclusion
We successfully optimized our BERT-large Transformers with DeepSpeed-inference and managed to decrease our model latency from 30.4ms to 10.4ms or 2.92x while keeping 99.88% of the model accuracy.
The results are impressive, but applying the optimization was as easy as adding one additional call to `deepspeed.init_inference`.
But I have to say that this isn't a plug-and-play process you can transfer to any Transformers model, task, or dataset. Also, make sure to check if your model is compatible with DeepSpeed-Inference.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module | https://www.philschmid.de/terraform-huggingface-amazon-sagemaker-advanced | 2022-03-01 | [
"HuggingFace",
"AWS",
"BERT",
"Terraform"
] | Learn how to apply autoscaling to Hugging Face Transformers and Amazon SageMaker using Terraform. | A few weeks ago we released a Terraform module [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest), which makes it super easy to deploy Hugging Face Transformers like BERT from Amazon S3 or the [Hugging Face Hub](http://hf.co/models) to Amazon SageMaker for real-time inference.
```python
module "sagemaker-huggingface" {
source = "philschmid/sagemaker-huggingface/aws"
version = "0.5.0"
name_prefix = "distilbert"
pytorch_version = "1.9.1"
transformers_version = "4.12.3"
instance_type = "ml.g4dn.xlarge"
hf_model_id = "distilbert-base-uncased-finetuned-sst-2-english"
hf_task = "text-classification"
autoscaling = {
max_capacity = 4 # The max capacity of the scalable target
scaling_target_invocations = 200 # The scaling target invocations (requests/minute)
}
}
```
You should check out the [“Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module”](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker) blog post if you want to know more about [Terraform](https://www.terraform.io/intro) and how we have built the module.
**TL;DR;** this module should enable companies and individuals to easily deploy Hugging Face Transformers without heavy lifting.
Since then we have gotten a lot of feedback and requests for additional features from users. Thank you for that! BTW, if you have any feedback or feature ideas, feel free to open a thread in the [forum](https://discuss.huggingface.co/c/sagemaker/17).
Below you can find the currently supported features plus the newly supported ones.
## Features
- Deploy Hugging Face Transformers from [hf.co/models](http://hf.co/models) to Amazon SageMaker
- Deploy Hugging Face Transformers from Amazon S3 to Amazon SageMaker
- 🆕 Deploy private Hugging Face Transformers from [hf.co/models](http://hf.co/models) to Amazon SageMaker with a `hf_api_token`
- 🆕 Add [Autoscaling](https://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-auto-scaling.html) to your Amazon SageMaker endpoints with `autoscaling` configuration
- 🆕 Deploy [Asynchronous Inference Endpoints](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) either from the [hf.co/models](http://hf.co/models) or Amazon S3
You can find examples for all use cases in the [repository](https://github.com/philschmid/terraform-aws-sagemaker-huggingface) of the module or in the [registry](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest). In addition to the feature updates, we also improved the naming by adding a random lower case string at the end of all resources.
Registry: [https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest)
Github: https://github.com/philschmid/terraform-aws-sagemaker-huggingface
Let's test some of the new features and let us deploy an Asynchronous Inference Endpoint with autoscaling to zero.
---
## How to deploy an Asynchronous Endpoint with Autoscaling using the [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) Terraform module
Before we get started, make sure you have the [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) installed and configured, as well as access to AWS Credentials to create the necessary services. [[Instructions](https://learn.hashicorp.com/tutorials/terraform/aws-build?in=terraform/aws-get-started#prerequisites)]
**What are we going to do:**
- create a new Terraform configuration
- initialize the AWS provider and our module
- deploy our Asynchronous Endpoint
- test the endpoint
- destroy the infrastructure
If you want to learn about Asynchronous Inference you can check out my blog post: [“**Asynchronous Inference with Hugging Face Transformers and Amazon SageMaker”**](https://www.philschmid.de/sagemaker-huggingface-async-inference)
### Create a new Terraform configuration
Each Terraform configuration must be in its own directory including a `main.tf` file. Our first step is to create the `async-terraform` directory with a `main.tf` file.
```bash
mkdir async-terraform
touch async-terraform/main.tf
cd async-terraform
```
### Initialize the AWS provider and our module
Next, we need to open the `main.tf` in a text editor and add the `aws` provider as well as our `module`.
_Note: the snippet below assumes that you have an AWS profile `default` configured with the needed permissions_
```bash
provider "aws" {
profile = "default"
region = "us-east-1"
}
# create bucket for async inference for inputs & outputs
resource "aws_s3_bucket" "async_inference_bucket" {
bucket = "async-inference-bucket"
}
module "huggingface_sagemaker" {
source = "philschmid/sagemaker-huggingface/aws"
version = "0.5.0"
name_prefix = "deploy-hub"
pytorch_version = "1.9.1"
transformers_version = "4.12.3"
instance_type = "ml.g4dn.xlarge"
hf_model_id = "distilbert-base-uncased-finetuned-sst-2-english"
hf_task = "text-classification"
async_config = {
# needs to be a s3 uri
s3_output_path = "s3://async-inference-bucket/async-distilbert"
}
autoscaling = {
min_capacity = 0
max_capacity = 4
scaling_target_invocations = 100
}
}
```
When we create a new configuration — or check out an existing configuration from version control — we need to initialize the directory with `terraform init`.
Initializing will download and install our AWS provider as well as the `sagemaker-huggingface` module.
```bash
terraform init
# Initializing modules...
# Downloading philschmid/sagemaker-huggingface/aws 0.5.0 for huggingface_sagemaker...
# - huggingface_sagemaker in .terraform/modules/huggingface_sagemaker
# Initializing the backend...
# Initializing provider plugins...
# - Finding latest version of hashicorp/random...
# - Finding hashicorp/aws versions matching "~> 4.0"...
# - Installing hashicorp/random v3.1.0...
```
### Deploy the Asynchronous Endpoint
To deploy/apply our configuration we run `terraform apply` command. Terraform will then print out which resources are going to be created and ask us if we want to continue, which can we confirm with `yes`.
```bash
terraform apply
```
Now Terraform will deploy our model to Amazon SageMaker as an asynchronous inference endpoint. This can take 2-5 minutes.
### Test the endpoint
To test our deployed endpoint we can use the [aws sdk](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html#API_runtime_InvokeEndpoint_SeeAlso). In our example, we are going to use the Python SageMaker SDK (`sagemaker`), but you can easily switch to the Java, JavaScript, .NET, or Go SDK to invoke the Amazon SageMaker endpoint. We are going to use the `sagemaker` SDK since it provides an easy-to-use [AsyncPredictor](https://sagemaker.readthedocs.io/en/stable/api/inference/predictor_async.html) object which does the heavy lifting of uploading the data to Amazon S3 for us.
For initializing our predictor we need the name of our deployed endpoint, which we can get by inspecting the Terraform output with `terraform output` or by going to the SageMaker service in the AWS Management Console, as well as the Amazon S3 bucket defined in our Terraform module.
We create a new file `request.py` with the following snippet.
_Make sure you have configured your credentials (and region) correctly and `sagemaker` installed_
```python
from sagemaker.huggingface import HuggingFacePredictor
from sagemaker.predictor_async import AsyncPredictor
ENDPOINT_NAME = "deploy-hub-ep-rzbiwuva"
ASYNC_S3_PATH = "s3://async-inference-bucket/async-distilbert"
async_predictor = AsyncPredictor(HuggingFacePredictor(ENDPOINT_NAME))
data = {
"inputs": [
"it 's a charming and often affecting journey .",
"it 's slow -- very, very slow",
"the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
"the emotions are raw and will strike a nerve with anyone who 's ever had family trauma ."
]
}
res = async_predictor.predict(data=data,input_path=ASYNC_S3_PATH)
print(res)
```
Now we can execute our request.
```python
python3 request.py
# [{'label': 'LABEL_2', 'score': 0.8808117508888245}, {'label': 'LABEL_0', 'score': 0.6126593947410583}, {'label': 'LABEL_2', 'score': 0.9425230622291565}, {'label': 'LABEL_0', 'score': 0.5511414408683777}]
```
### Destroy the infrastructure
To clean up our created resources we can run `terraform destroy`, which will delete all the created resources from the module.
## More Examples
You can find examples of how to deploy private models and use autoscaling in the [repository](https://github.com/philschmid/terraform-aws-sagemaker-huggingface) of the module or in the [registry](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest).
## Conclusion
The [sagemaker-huggingface](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest) Terraform module abstracts away all the heavy lifting for deploying Transformer models to Amazon SageMaker, which enables controlled, consistent and understandable managed deployments following the concepts of IaC. This should help companies move faster and integrate models deployed on Amazon SageMaker into their existing applications and IaC definitions.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Use Sentence Transformers with TensorFlow | https://www.philschmid.de/tensorflow-sentence-transformers | 2022-08-30 | [
"BERT",
"Tensorflow",
"HuggingFace",
"Keras"
] | Learn how to use a Sentence Transformers model with TensorFlow and Keras for creating document embeddings | In this blog, you will learn how to use a [Sentence Transformers](https://www.sbert.net/) model with TensorFlow and Keras. The blog will show you how to create a custom Keras model to load [Sentence Transformers](https://www.sbert.net/) models and run inference on it to create document embeddings.
[Sentence Transformers](https://www.sbert.net/) is the state-of-the-art library for sentence, text, and image embeddings to build semantic textual similarity, semantic search, or paraphrase mining applications using BERT and Transformers 🔎 1️⃣ ⭐️
![SBERT](/static/blog/tensorflow-sentence-transformers/sentence-transformers.png)
Paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
The library is built on top of PyTorch and Hugging Face Transformers so it is compatible with PyTorch models and not with TensorFlow by default.
But since Hugging Face Transformers is compatible with PyTorch and TensorFlow it is possible to load the raw Sentence Transformer models in Tensorflow.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Create a custom TensorFlow Model](#2-create-a-custom-tensorflow-model)
3. [Run inference and validate results](#3-run-inference-and-validate-results)
4. [Create e2e model with tokenizer included](#4-create-e2e-model-with-tokenizer-included)
Let's get started! 🚀
## 1. Setup Development Environment
Our first step is to install Transformers, along with tensorflow-text and some other libraries. We are also installing `sentence-transformers` for later use to validate our model and results.
```python
# installing tensorflow extra due to incompatibility with conda and tensorflow-text https://github.com/tensorflow/text/issues/644
!pip install transformers[tf] -q --upgrade
!pip install sentence-transformers -q # needed for validating results
```
## 2. Create a custom TensorFlow Model
When using `sentence-transformers` natively you can run inference by loading your model in the `SentenceTransformer` class and then calling the `.encode()` method. However, this only works with PyTorch and we want to use TensorFlow. When calling `.encode()` on your PyTorch model, SentenceTransformers will first do a forward pass through the vanilla Hugging Face `AutoModel` class and then apply pooling and/or normalization.
This means if we want to use TensorFlow we can create a similar `TFSentenceTransformer` class, which does the same thing as `SentenceTransformer` but uses TensorFlow and Keras instead of PyTorch.
Our first step is to create a custom TensorFlow model that initializes our `TFAutoModel` from Transformers and includes helper methods for `mean_pooling` and normalization.
_Note: We focus in this example only on Sentence Transformers, which are not including any additional layers._
```python
import tensorflow as tf
from transformers import TFAutoModel
class TFSentenceTransformer(tf.keras.layers.Layer):
def __init__(self, model_name_or_path, **kwargs):
super(TFSentenceTransformer, self).__init__()
# loads transformers model
self.model = TFAutoModel.from_pretrained(model_name_or_path, **kwargs)
def call(self, inputs, normalize=True):
# runs model on inputs
model_output = self.model(inputs)
# Perform pooling. In this case, mean pooling.
embeddings = self.mean_pooling(model_output, inputs["attention_mask"])
# normalizes the embeddings if wanted
if normalize:
embeddings = self.normalize(embeddings)
return embeddings
def mean_pooling(self, model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = tf.cast(
tf.broadcast_to(tf.expand_dims(attention_mask, -1), tf.shape(token_embeddings)),
tf.float32
)
return tf.math.reduce_sum(token_embeddings * input_mask_expanded, axis=1) / tf.clip_by_value(tf.math.reduce_sum(input_mask_expanded, axis=1), 1e-9, tf.float32.max)
def normalize(self, embeddings):
embeddings, _ = tf.linalg.normalize(embeddings, 2, axis=1)
return embeddings
```
We can now test our model by selecting and loading a Sentence Transformer from the [Hugging Face Hub](https://huggingface.co/models?library=sentence-transformers,tf&sort=downloads). We are going to use the [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model, which maps sentences & paragraphs to a 384 dimensional dense vector using mean pooling and normalization.
_Note: Different Sentence Transformers models can have different processing steps, e.g. cls pooling instead of mean pooling or an additional dense layer. For this, make sure to check the model repository and adjust the `TFSentenceTransformer` class._
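For instance, a checkpoint that uses cls pooling could be covered with a small variant of the class above. This is only a sketch and not needed for the model used in this example.
```python
class TFSentenceTransformerClsPooling(TFSentenceTransformer):
    """Variant for checkpoints that use cls pooling instead of mean pooling."""

    def call(self, inputs, normalize=True):
        model_output = self.model(inputs)
        # use the hidden state of the first ([CLS]) token as the sentence embedding
        embeddings = model_output[0][:, 0]
        if normalize:
            embeddings = self.normalize(embeddings)
        return embeddings
```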
```python
from transformers import AutoTokenizer
# Hugging Face model id
model_id = 'sentence-transformers/all-MiniLM-L6-v2'
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFSentenceTransformer(model_id)
# Run inference & create embeddings
payload = ["This is a sentence embedding",
"This is another sentence embedding"]
encoded_input = tokenizer(payload, padding=True, truncation=True, return_tensors='tf')
sentence_embedding = model(encoded_input)
print(sentence_embedding.shape)
# <tf.Tensor: shape=(2, 384), dtype=float32, numpy= array([[[ 3.37564945e-02, 4.20359336e-02, 6.31270036e-02,
# (2, 384)
```
## 3. Run inference and validate results
After we have now successfully created our custom TensorFlow model `TFSentenceTransformer` we should compare our results to the results from the original Sentence Transformers model.
Therefore, we load our model using `sentence-transformers` and compare the results.
```python
import numpy as np
from sentence_transformers import SentenceTransformer
compare_input = "This is a sentence embedding, which we will compare between PyTorch and TensorFlow"
# loading sentence transformers
st_model = SentenceTransformer(model_id,device="cpu")
# run inference with sentence transformers
st_embeddings = st_model.encode(compare_input)
# run inference with TFSentenceTransformer
encoded_input = tokenizer(compare_input, return_tensors="tf")
tf_embeddings = model(encoded_input)
# compare embeddings
are_results_close = np.allclose(tf_embeddings.numpy()[0],st_embeddings, atol=7e-8)
print(f"Results close: {are_results_close}")
# Results close: True
```
The sentence embeddings created by our `TFSentenceTransformer` model differ by less than `0.00000007` from the original Sentence Transformers model. This is good enough to validate our model.
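If you want to see the actual deviation instead of a boolean check, you can, for example, print the maximum absolute difference between the two embeddings:
```python
import numpy as np

# optional: inspect the largest deviation between the TensorFlow and PyTorch embeddings
max_diff = np.max(np.abs(tf_embeddings.numpy()[0] - st_embeddings))
print(f"Maximum absolute difference: {max_diff}")
```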
## 4. Create e2e model with tokenizer included
One difference between the original Sentence Transformers model and the custom TensorFlow model is that the original model includes a tokenizer, which is not part of the custom TensorFlow model.
The original Sentence Transformers model also uses `AutoTokenizer` to tokenize the sentences outside of the model, meaning the tokenizer is not included in the model graph. But we can create a custom model for `BERT` that includes the [FastBertTokenizer](https://www.tensorflow.org/text/api_docs/python/text/FastBertTokenizer) from TensorFlow Text. This will allow us to use the tokenizer in our model graph.
The [FastBertTokenizer](https://www.tensorflow.org/text/api_docs/python/text/FastBertTokenizer) has been integrated into Transformers as [TFBertTokenizer](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/bert#transformers.TFBertTokenizer) and works with the Hugging Face model assets.
```python
import tensorflow as tf
from transformers import TFAutoModel, TFBertTokenizer
class E2ESentenceTransformer(tf.keras.Model):
def __init__(self, model_name_or_path, **kwargs):
super().__init__()
# loads the in-graph tokenizer
self.tokenizer = TFBertTokenizer.from_pretrained(model_name_or_path, **kwargs)
# loads our TFSentenceTransformer
self.model = TFSentenceTransformer(model_name_or_path, **kwargs)
def call(self, inputs):
# runs tokenization and creates embedding
tokenized = self.tokenizer(inputs)
return self.model(tokenized)
```
We can now create our `E2ESentenceTransformer` model that includes the tokenizer and run inference on it.
```python
# hugging face model id
model_id = 'sentence-transformers/all-MiniLM-L6-v2'
# loading model with tokenizer and sentence transformer
e2e_model = E2ESentenceTransformer(model_id)
# run inference
payload = "This is a sentence embedding"
pred = e2e_model([payload]) # Pass strings straight to model!
print(f"output shape: {pred.shape}")
e2e_model.summary() # prints model summary
# output shape: (1, 384)
# Model: "e2e_sentence_transformer_30"
# _________________________________________________________________
# Layer (type) Output Shape Param #
# =================================================================
# tf_bert_tokenizer_30 (TFBer multiple 0
# tTokenizer)
#
# tf_sentence_transformer_59 multiple 22713216
# (TFSentenceTransformer)
#
# =================================================================
# Total params: 22,713,216
# Trainable params: 22,713,216
# Non-trainable params: 0
# _________________________________________________________________
```
## Conclusion
That's it. We have now successfully created a custom TensorFlow model that can load a Sentence Transformer model and run inference on it to create document embeddings. This will allow you to integrate Sentence Transformers into your existing and new TensorFlow projects and workflows. We validated that our model creates embeddings that are similar to those of the original Sentence Transformers model.
And as a bonus, we looked into how to integrate the tokenization into our model graph as well and created an `E2ESentenceTransformer` model that includes the tokenizer for BERT models, which achieve state-of-the-art performance on similar tasks.
If you are interested in how to deploy those models with TFServing let me know!
_[Converting SentenceTransformers to Tensorflow](https://skeptric.com/sentencetransformers-to-tensorflow/) is another source by Edward Ross._
---
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Create custom Github Action in 4 steps | https://www.philschmid.de/create-custom-github-action-in-4-steps | 2020-09-25 | [
"Cloud",
"DevOps",
"Github"
] | Create a custom github action in 4 steps. Also learn how to test it offline and publish it in the Github Action marketplace. | Automation, complexity reduction, reproducibility, maintainability are all advantages that can be realized by a
continuous integration (CI) pipeline. With GitHub Actions, you can build these CI pipelines.
"You can create workflows using actions defined in your repository, open-source Actions in a public repository on
GitHub, or a published Docker container image." -
[Source](https://docs.github.com/en/actions/getting-started-with-github-actions/about-github-actions)
I recently started a new project at work, where I had to implement a new CI pipeline. In this process, I had to call an
API, validate the result, and pass it on. I ended up with a 20 lines long inline script within the `run` section. That
was anything but simple, maintainable, and reusable.
> Well, done 🤦🏻♂️
After this miserable failure, I looked up how to create custom Github Actions. I was pleasantly surprised that it is
very easy to write, test, and publish your own custom Github Action. It took me around 1h to research, implement, test,
deploy, and release my action. You can check it out
[here](https://github.com/marketplace/actions/download-custom-release-asset).
---
## Tutorial
In the following tutorial, we are going to create a custom Github Action in 4 steps. Our Action will execute a simple
bash script. This bash script will call the [pokeapi.co](http://pokeapi.co/) API-Endpoint with a PokeDex ID as a
parameter. Then we will parse the result and return the name of the pokemon. After that, we will `echo` the result of
our bash script in our Github Actions workflow.
**What are we going to do:**
- Create a Github repository with a license
- Create `action.yaml` file with inputs and outputs
- Create our bash script
- Create a `Dockerfile`
_optional:_
- _test it locally with `act`_
- _publish the action to the Github Action marketplace_
You can find everything we do in this [Github repository](https://github.com/philschmid/blog-custom-github-action).
---
## Create a Github repository with a license
First, [we create a repository](https://github.com/new). We can directly add a `.gitignore`, `README.md`, and license
file. For the Repository name, we can use whatever we want.
![create Github Repository](/static/blog/create-custom-github-action-in-4-steps/create-github-repository.png)
Next, clone the repository to your local machine and open it with your preferred IDE.
```bash
git clone https://github.com/philschmid/blog-custom-github-action.git && \
cd blog-custom-github-action && \
code .
```
---
## Create `action.yaml` file with inputs and outputs
Next, we create an `action.yaml` file in our repository. The `action.yaml` is the metadata file and defines the inputs,
outputs, and main entrypoint for our Action. It uses `YAML` as syntax.
In our example, we use one input and one output. If you want detailed documentation for the `action.yaml` and want to learn more
about the configuration options, take a look
[here](https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions).
The `action.yaml` always has to include this:
- `name`: The name of your Action. This must be globally unique if you want to publish your Github Action to the
marketplace
- `description` : A short description of what your Action is doing
- `inputs`: defines the input parameters you can pass into your bash script. You can access them with
`$INPUT_{Variable}` in our example `$INPUT_POKEMON_ID`
- `outputs`: defines the output parameters that you can use later in another workflow step
- `runs`: defines where and what the action will execute; in our case, it will run a Docker container
The `action.yaml` we are going to use looks like this.
```yaml
# action.yaml
name: 'Blog Custom Github Action'
description: 'Call an API and get the result'
inputs:
pokemon_id:
description: 'number of the pokemon in the pokedex'
required: true
default: 1
outputs:
pokemon_name:
    description: 'Name of the pokemon'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.pokemon_id}}
```
---
## Create a bash script
In the third step, we create our bash script called `entrypoint.sh`. This script will be executed in the Action.
For demo purposes, we use a simple script, which calls the [pokeapi.co](http://pokeapi.co/) API and parses the return
value with the [jq processor](https://stedolan.github.io/jq/) to get the name of the pokemon. In order to create an
output for our Action we need to use a Github Action specific syntax: `echo "::set-output name=<output name>::<value>"`.
```bash
#!/bin/bash
set -e
api_url="https://pokeapi.co/api/v2/pokemon/${INPUT_POKEMON_ID}"
echo $api_url
pokemon_name=$(curl "${api_url}" | jq ".name")
echo $pokemon_name
echo "::set-output name=pokemon_name::$pokemon_name"
```
---
## Create a `Dockerfile`
The last step in our 4 step tutorial is to create a `Dockerfile`. If you are not familiar with docker and `Dockerfile`
check out
[Dockerfile support for GitHub Actions](https://docs.github.com/en/actions/creating-actions/dockerfile-support-for-github-actions).
```docker
# Base image
FROM alpine:latest
# installs required packages for our script
RUN apk add --no-cache \
bash \
ca-certificates \
curl \
jq
# Copies your code file repository to the filesystem
COPY entrypoint.sh /entrypoint.sh
# change permission to make the script executable
RUN chmod +x /entrypoint.sh
# file to execute when the docker container starts up
ENTRYPOINT ["/entrypoint.sh"]
```
---
That's it. We've done it. ✅ To use it, we create a new workflow file in `.github/workflows` and add our Action as a `step`.
```yaml
- name: Get Pokemon name
uses: ./ # Uses an action in the root directory
id: pokemon
with:
pokemon_id: 150
```
To access the output of our Action we have to define the `id` attribute in our step.
```yaml
# Use the pokemon_name output from our action (id:pokemon)
- name: Get the pokemon
run: echo "${{ steps.pokemon.outputs.pokemon_name }} attack"
```
---
## Optional
The previous 4 steps have shown us how to build our own custom Github Action. But we never tested if it works as
expected. In the following two additional steps we test our Github Action locally with
[act](https://github.com/nektos/act) and afterwards publish it to the Github Marketplace.
---
### Local testing with `act`
[Act](https://github.com/nektos/act) is an open-source CLI toolkit written in Go. It allows us to execute and test our
Github Actions locally. It supports environment variables, secrets, and custom events.
[Definitely check it out.](https://github.com/nektos/act) Instructions for the installation can be found
[here](https://github.com/nektos/act#installation).
Before we can test our Action we have to create a Github Workflow in `.github/workflows`.
```yaml
on: [push]
jobs:
custom_test:
runs-on: ubuntu-latest
name: We test it locally with act
steps:
- name: Get Pokemon name
uses: ./ # Uses an action in the root directory
id: pokemon
with:
pokemon_id: 150
- name: Get the pokemon
run: echo "${{ steps.pokemon.outputs.pokemon_name }} attack"
```
Afterwards we can run `act` in our terminal and it should run our action.
```bash
act
```
![act local test](/static/blog/create-custom-github-action-in-4-steps/act-local-test.png)
As a result, we can see our Action runs successfully. It also outputs our `pokemon_name` in the second-to-last line
`"mewtwo attack"`.
---
## Publish the Action to the marketplace
To be able to publish a custom Github Action to the marketplace we need a globally unique name. There must be no Action
with this name on the marketplace.
After we have committed and pushed our files to our repository and go to the web console, we should see something like this.
![github repository release](/static/blog/create-custom-github-action-in-4-steps/github-repository-release.png)
If we click on "draft a release" we should see a custom release page specifically for Github Actions.
![github action description](/static/blog/create-custom-github-action-in-4-steps/github-action-description.png)
If we have green checks ✅ for name and description we are able to publish the Action to the marketplace.
Adding an icon and color can be done in the `action.yaml`. You can check out how to do that
[here](https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#branding).
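As a rough example, a branding section in the `action.yaml` could look like the excerpt below; the icon and color are placeholders, pick your own from the options listed in the documentation:
```yaml
# action.yaml (excerpt)
branding:
  icon: 'download' # a Feather icon name
  color: 'green'
```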
The next step is to create a new release with a title. Therefore we scroll down a bit and add a release version and
title to it.
![create release](/static/blog/create-custom-github-action-in-4-steps/create-release.png)
After that we can click "publish release" and our custom Github Action is published to the marketplace. We can find it
on [https://github.com/marketplace](https://github.com/marketplace) by searching "blog-custom".
![github marketplace search](/static/blog/create-custom-github-action-in-4-steps/github-marketplace-search.png)
Now we are able to use our custom Github Action in our workflows without having it in the project folder.
```yaml
- name: Get Pokemon name
uses: philschmid/blog-custom-github-action@master # or @release_version (e.g. v1)
id: pokemon
with:
pokemon_id: 150
```
_This is only a quick example of how to publish a Github Action to the marketplace. There are many more custom settings,
such as category, icon, and branding that we have not configured._
---
Thanks for reading. I hope I was able to prevent some Github Action failures.
There are more possibilities to create your own Github Actions for example with Node. You can find more information
[here](https://docs.github.com/en/actions/creating-actions). Furthermore, Github has fantastic
[documentation for Github Action](https://docs.github.com/en/actions).
You can find the code for the tutorial in this
[Github repository](https://github.com/philschmid/blog-custom-github-action).
---
If you have any questions, feel free to contact me or comment on this article. You can also connect with me on
[Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy LayoutLM with Hugging Face Inference Endpoints | https://www.philschmid.de/inference-endpoints-layoutlm | 2022-10-06 | [
"DocumentAI",
"HuggingFace",
"Transformers",
"LayoutLM"
] | Learn how to deploy LayoutLM for document understanding using Hugging Face Inference Endpoints. | In this blog, you will learn how to deploy a fine-tuned [LayoutLM (v1)](https://huggingface.co/docs/transformers/model_doc/layoutlm) for document understanding using [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints). LayoutLM is a multimodal Transformer model for document image understanding and information extraction and can be used for form understanding and receipt understanding. LayoutLM (v1) is the only model in the LayoutLM family with an MIT license, which allows it to be used for commercial purposes, unlike LayoutLMv2/LayoutLMv3.
If you want to learn how to fine-tune LayoutLM, you should check out my previous blog post, [“Document AI: Fine-tuning LayoutLM for document-understanding using Hugging Face Transformers”](https://www.philschmid.de/fine-tuning-layoutlm)
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active plan and _WRITE_ access to the model repository.
2. Can access the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
The [Tutorial](#tutorial-deploy-layoutlm-and-send-requests) will cover how to:
1. [Deploy the custom handler as an Inference Endpoint](#1-deploy-the-custom-handler-as-an-inference-endpoint)
2. [Send HTTP request using Python](#2-send-http-request-using-python)
3. [Draw result on image](#3-draw-result-on-image)
## What is Hugging Face Inference Endpoints?
[🤗 Inference Endpoints](https://huggingface.co/inference-endpoints) offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
A Hugging Face Inference Endpoint is built from a [Hugging Face Model Repository](https://huggingface.co/models). It supports all the [Transformers and Sentence-Transformers tasks](https://huggingface.co/docs/inference-endpoints/supported_tasks) and any arbitrary ML Framework through easy customization by adding a [custom inference handler.](https://huggingface.co/docs/inference-endpoints/guides/custom_handler)
This [custom inference handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) can be used to implement simple inference pipelines for ML frameworks like Keras, Tensorflow, and scikit-learn or can be used to add custom business logic to your existing transformers pipeline.
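To give a rough idea of the interface before we look at the LayoutLM-specific implementation in the tutorial below, a custom handler is just a class called `EndpointHandler` with an `__init__` and a `__call__` method. The following is a simplified, generic sketch, not the handler used in this tutorial:
```python
from typing import Any, Dict, List

class EndpointHandler:
    def __init__(self, path: str = ""):
        # load your model, processor or any other artifacts from the repository path
        self.path = path

    def __call__(self, data: Dict[str, Any]) -> Dict[str, List[Any]]:
        # pre-process the request, run the prediction and post-process the result
        inputs = data.pop("inputs", data)
        return {"predictions": [inputs]}
```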
## Tutorial: Deploy LayoutLM and Send requests
In this tutorial, you will learn how to deploy a [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm) to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how you can integrate it via an API into your products.
This tutorial does not cover how to create the custom handler for inference. If you want to learn how to create a custom handler for Inference Endpoints, you can either check out the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler)
We are going to deploy [philschmid/layoutlm-funsd](https://huggingface.co/philschmid/layoutlm-funsd) which implements the following `handler.py`
```python
from typing import Dict, List, Any
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor
import torch
from subprocess import run
# install tesseract-ocr and pytesseract
run("apt install -y tesseract-ocr", shell=True, check=True)
run("pip install pytesseract", shell=True, check=True)
# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
return [
width * (bbox[0] / 1000),
height * (bbox[1] / 1000),
width * (bbox[2] / 1000),
height * (bbox[3] / 1000),
]
# set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class EndpointHandler:
def __init__(self, path=""):
# load model and processor from path
self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device)
self.processor = LayoutLMv2Processor.from_pretrained(path)
def __call__(self, data: Dict[str, bytes]) -> Dict[str, List[Any]]:
"""
Args:
data (:obj:):
includes the deserialized image file as PIL.Image
"""
# process input
image = data.pop("inputs", data)
# process image
encoding = self.processor(image, return_tensors="pt")
# run prediction
with torch.inference_mode():
outputs = self.model(
input_ids=encoding.input_ids.to(device),
bbox=encoding.bbox.to(device),
attention_mask=encoding.attention_mask.to(device),
token_type_ids=encoding.token_type_ids.to(device),
)
predictions = outputs.logits.softmax(-1)
# post process output
result = []
for item, inp_ids, bbox in zip(
predictions.squeeze(0).cpu(), encoding.input_ids.squeeze(0).cpu(), encoding.bbox.squeeze(0).cpu()
):
label = self.model.config.id2label[int(item.argmax().cpu())]
if label == "O":
continue
score = item.max().item()
text = self.processor.tokenizer.decode(inp_ids)
bbox = unnormalize_box(bbox.tolist(), image.width, image.height)
result.append({"label": label, "score": score, "text": text, "bbox": bbox})
return {"predictions": result}
```
## 1. Deploy the custom handler as an Inference Endpoint
UI: [https://ui.endpoints.huggingface.co/](https://ui.endpoints.huggingface.co/)
The first step is to deploy our model as an Inference Endpoint. We can deploy our Custom Handler the same way as a regular Inference Endpoint.
Select the repository, the cloud, and the region, adjust the instance and security settings, and deploy.
![repository](/static/blog/inference-endpoints-layoutlm/repository.png)
The Inference Endpoint Service will check during the creation of your Endpoint if there is a `handler.py` available and valid and will use it for serving requests no matter which “Task” you select.
_Note: Make sure to check that the “Task” in the Advanced Config is “Custom”. This will replace the inference widget with the custom inference widget to easily test our model._
![task_selection](/static/blog/inference-endpoints-layoutlm/task_selection.png)
After deploying our endpoint, we can test it using the inference widget. Since we have a `Custom` task, we can directly upload a form as “file input”.
![inference_widget](/static/blog/inference-endpoints-layoutlm/inference_widget.png)
## 2. Send HTTP request using Python
Hugging Face Inference Endpoints can directly work with binary data, which means that we can directly send the image of our document to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).
```python
import json
import requests as r
import mimetypes
ENDPOINT_URL="" # url of your endpoint
HF_TOKEN="" # organization token where you deployed your endpoint
def predict(path_to_image:str=None):
with open(path_to_image, "rb") as i:
b = i.read()
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": mimetypes.guess_type(path_to_image)[0]
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_image="path_to_your_image.png")
print(prediction)
# {'predictions': [{'label': 'I-ANSWER', 'score': 0.4823932945728302, 'text': '[CLS]', 'bbox': [0.0, 0.0, 0.0, 0.0]}, {'label': 'B-HEADER', 'score': 0.992474377155304, 'text': 'your', 'bbox': [1712.529, 181.203, 1859.949, 228.88799999999998]},
```
## 3. Draw result on image
To get a better understanding of what the model predicted you can also draw the predictions on the provided image.
```python
from PIL import Image, ImageDraw, ImageFont
# draw results on image
def draw_result(path_to_image,result):
image = Image.open(path_to_image)
label2color = {
"B-HEADER": "blue",
"B-QUESTION": "red",
"B-ANSWER": "green",
"I-HEADER": "blue",
"I-QUESTION": "red",
"I-ANSWER": "green",
}
# draw predictions over the image
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
for res in result:
draw.rectangle(res["bbox"], outline="black")
draw.rectangle(res["bbox"], outline=label2color[res["label"]])
draw.text((res["bbox"][0] + 10, res["bbox"][1] - 10), text=res["label"], fill=label2color[res["label"]], font=font)
return image
draw_result("path_to_your_image.png", prediction["predictions"])
```
![result](/static/blog/inference-endpoints-layoutlm/result.png)
## Conclusion
That's it! We successfully deployed our LayoutLM to Hugging Face Inference Endpoints and ran some predictions.
To underline this again, we created a managed, secure, scalable inference endpoint that runs our custom handler, including our custom logic. This will allow Data scientists and Machine Learning Engineers to focus on R&D, improving the model rather than fiddling with MLOps topics.
Now, it's your turn! [Sign up](https://ui.endpoints.huggingface.co/new) and create your custom handler within a few minutes!
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Distributed training on multilingual BERT with Hugging Face Transformers & Amazon SageMaker | https://www.philschmid.de/pytorch-distributed-training-transformers | 2022-01-25 | [
"HuggingFace",
"AWS",
"BERT",
"PyTorch"
] | Learn how to run large-scale distributed training using multilingual BERT on over 1 million data points with Hugging Face Transformers & Amazon SageMaker | Welcome to this end-to-end multilingual text-classification example using PyTorch. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with `PyTorch` to fine-tune a multilingual transformer for text classification. This example is a derived version of the [text-classification.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) notebook and uses Amazon SageMaker for distributed training. In the [text-classification.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) we showed how to fine-tune `distilbert-base-multilingual-cased` on the `amazon_reviews_multi` dataset for `sentiment-analysis`. This dataset has over 1.2 million data points, which is huge. Running the training on 1x NVIDIA V100 takes around 6.5 hours for a `batch_size` of 16, which is quite long.
To scale and accelerate our training we will use [Amazon SageMaker](https://aws.amazon.com/de/sagemaker/), which provides two strategies for [distributed training](https://huggingface.co/docs/sagemaker/train#distributed-training), [data parallelism](https://huggingface.co/docs/sagemaker/train#data-parallelism) and model parallelism. Data parallelism splits a training set across several GPUs, while [model parallelism](https://huggingface.co/docs/sagemaker/train#model-parallelism) splits a model across several GPUs. We are going to use [SageMaker Data Parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), which has been built into the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) API. To be able to use data parallelism we only have to define the `distribution` parameter in our `HuggingFace` estimator.
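As a quick preview (the full estimator setup follows later in this example), enabling data parallelism boils down to this single configuration dictionary:
```python
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed': {'dataparallel': {'enabled': True}}}
```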
I moved the "training" part of the [text-classification.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) notebook into a separate training script [train.py](./scripts/train.py), which accepts the same hyperparameters and can be run on Amazon SageMaker using the `HuggingFace` estimator.
Our goal is to decrease the training duration by scaling our global/effective batch size from 16 up to 128, which is 8x bigger than before. For monitoring our training we will use the new Training Metrics support by the [Hugging Face Hub](https://huggingface.co/models)
### Installation
```python
#!pip install sagemaker
!pip install transformers datasets tensorboard datasets[s3] --upgrade
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join).
If you already have an account you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## Setup & Configuration
In this step we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. the `tokenizer` and `model` we will use.
```python
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
_Note: The execution role is only available when running a notebook within SageMaker (SageMaker Notebook Instances or Studio). If you run `get_execution_role` in a notebook not on SageMaker, expect an error._
You can uncomment the cell below and provide an IAM Role name with SageMaker permissions to set up your environment outside of SageMaker.
```python
# import sagemaker
# import boto3
# import os
# os.environ["AWS_DEFAULT_REGION"]="your-region"
# # This ROLE needs to exists with your associated AWS Credentials and needs permission for SageMaker
# ROLE_NAME='role-name-of-your-iam-role-with-right-permissions'
# iam_client = boto3.client('iam')
# role = iam_client.get_role(RoleName=ROLE_NAME)['Role']['Arn']
# sess = sagemaker.Session()
# print(f"sagemaker role arn: {role}")
# print(f"sagemaker bucket: {sess.default_bucket()}")
# print(f"sagemaker session region: {sess.boto_region_name}")
```
In this example, we are going to fine-tune [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), a multilingual DistilBERT model.
```python
model_id = "distilbert-base-multilingual-cased"
# name for our repository on the hub
model_name = model_id.split("/")[-1] if "/" in model_id else model_id
repo_name = f"{model_name}-sentiment"
```
## Dataset & Pre-processing
As dataset, we will use [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi), a multilingual text-classification dataset. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.). The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.
```python
dataset_id="amazon_reviews_multi"
dataset_config="all_languages"
seed=33
```
To load the `amazon_reviews_multi` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
dataset = load_dataset(dataset_id,dataset_config)
```
### Pre-processing & Tokenization
The [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset has 5 classes (`stars`). To match those to a `sentiment-analysis` task, we will map the star ratings to the following `labels`:
- `[1-2]`: `Negative`
- `[3]`: `Neutral`
- `[4-5]`: `Positive`
Those `labels` can later be used to create a user-friendly output after we have fine-tuned our model.
```python
from datasets import ClassLabel
def map_start_to_label(review):
if review["stars"] < 3:
review["stars"] = 0
elif review["stars"] == 3:
review["stars"] = 1
else:
review["stars"] = 2
return review
# convert 1-5 star reviews to 0,1,2
dataset = dataset.map(map_start_to_label)
# convert feature from Value to ClassLabel
class_feature = ClassLabel(names=['negative','neutral', 'positive'])
dataset = dataset.cast_column("stars", class_feature)
# rename our target column to labels
dataset = dataset.rename_column("stars","labels")
# drop columns that are not needed
dataset = dataset.remove_columns(['review_id', 'product_id', 'reviewer_id', 'review_title', 'language', 'product_category'])
```
Before we prepare the dataset for training, let's take a quick look at the class distribution of the dataset.
```python
import pandas as pd
df = dataset["train"].to_pandas()
df.hist()
```
![distribution](/static/blog/pytorch-distributed-training-transformers/distribution.png)
The distribution is not perfect, but let's give it a try and improve on it later.
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Additionally, we add `truncation=True` and `max_length=512` to align the length and truncate texts that are longer than the maximum size allowed by the model.
```python
def process(examples):
tokenized_inputs = tokenizer(
examples["review_body"], truncation=True, max_length=512
)
return tokenized_inputs
tokenized_datasets = dataset.map(process, batched=True)
tokenized_datasets["train"].features
```
Before we can start our distributed training, we need to upload our already pre-processed dataset to Amazon S3. Therefore, we will use the built-in utils of `datasets`.
```python
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{dataset_id}/train'
tokenized_datasets["train"].save_to_disk(training_input_path, fs=s3)
# save validation_dataset to s3
eval_input_path = f's3://{sess.default_bucket()}/{dataset_id}/test'
tokenized_datasets["validation"].save_to_disk(eval_input_path, fs=s3)
```
## Creating an Estimator and start a training job
The last step before we can start our managed training is to define our hyperparameters, create our SageMaker `HuggingFace` estimator, and configure distributed training.
```python
from sagemaker.huggingface import HuggingFace
from huggingface_hub import HfFolder
# hyperparameters, which are passed into the training job
hyperparameters={
'model_id':'distilbert-base-multilingual-cased',
'epochs': 3,
'per_device_train_batch_size': 16,
'per_device_eval_batch_size': 16,
'learning_rate': 3e-5*8,
'fp16': True,
# logging & evaluation strategie
'strategy':'steps',
'steps':5_000,
'save_total_limit':2,
'load_best_model_at_end':True,
'metric_for_best_model':"f1",
# push to hub config
'push_to_hub': True,
'hub_model_id': 'distilbert-base-multilingual-cased-sentiment-2',
'hub_token': HfFolder.get_token()
}
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'train.py',
source_dir = './scripts',
instance_type = 'ml.p3.16xlarge',
instance_count = 1,
role = role,
transformers_version = '4.12',
pytorch_version = '1.9',
py_version = 'py38',
hyperparameters = hyperparameters,
distribution = distribution
)
```
Since we are using SageMaker Data Parallelism, our total batch size will be `per_device_train_batch_size * n_gpus`.
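As a quick sanity check, this is how the effective batch size and the scaled learning rate from the hyperparameters above come together (the `ml.p3.16xlarge` instance has 8x NVIDIA V100 GPUs):
```python
# ml.p3.16xlarge -> 8x NVIDIA V100 GPUs
n_gpus = 8
per_device_train_batch_size = 16

total_batch_size = per_device_train_batch_size * n_gpus
print(total_batch_size)  # 128, 8x bigger than the single-GPU setup

# this is also why the learning rate above is scaled linearly: 3e-5 * 8
```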
```python
# define a data input dictonary with our uploaded s3 uris
data = {
'train': training_input_path,
'eval': eval_input_path
}
# starting the train job with our uploaded datasets as input
# setting wait to False to not expose the HF Token
huggingface_estimator.fit(data,wait=False)
```
Since we are using the Hugging Face Hub integration with TensorBoard, we can inspect our progress directly on the Hub, as well as test checkpoints during the training.
```python
from huggingface_hub import HfApi
whoami = HfApi().whoami()
username = whoami['name']
print(f"https://huggingface.co/{username}/{hyperparameters['hub_model_id']}")
```
![tensorboard](/static/blog/pytorch-distributed-training-transformers/tensorboard.png)
## Conclusion
We managed to scale our training from 1x GPU to 8x GPU without any issues or code changes required. We used the Python SageMaker SDK to create our managed training job and only needed to provide some information about the environment our training should run in, our training script and our hyperparameters.
With this we were able to reduce the training time from 6.5 hours to ~1.5 hours, which is huge! With this we can evaluate and test ~5x more models than before.
---
You can find the code [here](https://github.com/philschmid/transformers-pytorch-text-classification) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Serverless Machine Learning Applications with Hugging Face Gradio and AWS Lambda | https://www.philschmid.de/serverless-gradio | 2022-11-15 | [
"Serverless",
"HuggingFace",
"AWS",
"Gradio"
] | Learn how to deploy a Hugging Face Gradio Application using Hugging Face Transformers to AWS Lambda for serverless workloads. | _“Serverless computing is a method of providing backend services on an as-used basis. A serverless provider allows users to write and deploy code without […] worrying about the underlying infrastructure ”_ [[What is serverless computing?](https://www.cloudflare.com/en-gb/learning/serverless/what-is-serverless/)]
Serverless computing can offer advantages over traditional cloud-based or server-centric infrastructure, like greater scalability, more flexibility, and quicker time to release, all at a reduced cost. Serverless computing is/was mostly used for web development and not for machine learning, due to its computing and resource-intensive nature.
But with increasing demand for and improvements of serverless cloud services, machine learning becomes more and more suitable for those environments, if you accept the existing trade-offs, e.g., [cold starts](https://aws.amazon.com/de/blogs/compute/operating-lambda-performance-optimization-part-1/) or limited hardware.
This blog covers how to deploy a Gradio application to AWS Lambda, you will learn how to:
1. [Create a Gradio application for `sentiment-analysis` using `transformers` ](#1-create-a-gradio-application-for-sentiment-analysis-using-transformers)
2. [Test the local Gradio application](#2-test-the-local-gradio-application)
3. [Deploy Gradio application to AWS Lambda with AWS CDK](#3-deploy-gradio-application-to-aws-lambda-with-aws-cdk)
4. [Test the serverless Gradio application](#4-test-the-serverless-gradio-application)
5. [_Optional_: embed the Gradio application into a website](#5-optional-embed-the-gradio-application-into-a-website)
You will find the complete code for it in this [GitHub repository.](https://github.com/philschmid/serverless-machine-learning/tree/main)
## What is AWS Lambda?
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a serverless computing service that lets you run code without managing servers. It executes your code only when required and scales automatically, from a few requests per day to thousands per second.
## What is Hugging Face Gradio?
[Hugging Face Gradio](https://www.gradio.app/) is a Python library to rapidly build machine learning applications with a friendly web interface that anyone can use and share. Gradio supports an intuitive interface for machine learning use cases, which use text, images, audio, 3D objects, and more.
## 1. Create a Gradio application for `sentiment-analysis` using `transformers`
The first step is to create a Gradio application, which we can later deploy to AWS Lambda as our serverless application. We are going to create a `sentiment-analysis` application using Hugging Face Transformers. We first need to create a file with all the Python dependencies we need and install them. Therefore we create a `requirements.txt` file in a new directory, e.g., `serverless_gradio/`, and include `gradio`, `transformers`, and `torch` as dependencies.
```bash
# requirements.txt
gradio==3.1.4
transformers
torch
```
We can now install them with `pip install -r requirements.txt`. After the installation, we can create our `app.py` file, which includes our gradio application. This blog doesn’t cover details on how to create Gradio applications. If you are new to gradio you should check out the [Introduction to Gradio](https://huggingface.co/course/chapter9/1?fw=pt) section in the Hugging Face course or the [Quickstart](https://gradio.app/getting_started/) guide by the Gradio Team
```python
# app.py
import gradio as gr
from transformers import pipeline
# load transformers pipeline from a local directory (model)
clf = pipeline("sentiment-analysis", model="model/")
# predict function used by gradio
def sentiment(payload):
prediction = clf(payload, return_all_scores=True)
# convert list to dict
result = {}
for pred in prediction[0]:
result[pred["label"]] = pred["score"]
return result
# create gradio interface, with text input and dict output
demo = gr.Interface(
fn=sentiment,
inputs=gr.Textbox(placeholder="Enter a positive or negative sentence here..."),
outputs="label",
interpretation="default",
examples=[["I Love Serverless Machine Learning"], ["Running Gradio on AWS Lambda is amazing"]],
allow_flagging="never",
)
# run the app
demo.launch(server_port=8080, enable_queue=False)
```
Our Gradio application will have one text input and will use the `transformers` pipeline to run predictions on the “text” input. The pipeline will use a model we provide through a local directory (`model/`), to reduce the overhead of downloading it from the [Hugging Face Hub](https://huggingface.co/models) when running on AWS Lambda.
We use a python script to download and save our model into the directory. Therefore we create a `download_model.py` file and add the following code.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# hugging face model id
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
AutoModelForSequenceClassification.from_pretrained(model_id).save_pretrained("model")
AutoTokenizer.from_pretrained(model_id).save_pretrained("model")
```
Next, we need to run the script with `python download_model.py` , which will load our model from the [Hugging Face Hub](https://huggingface.co/models) and save it in a local `model/`. After that, we are ready to test our application.
## 2. Test the local Gradio application
After we create our `app.py` and download our model to `model/` we can start our application.
```bash
python app.py
```
Our Gradio application now starts, and we should see a terminal output with a local URL to open.
```bash
Running on local URL: http://127.0.0.1:8080/
```
If we now open this URL in the browser we should see our application and can test it.
![local-gradio-application](/static/blog/serverless-gradio/local-gradio.png)
## 3. Deploy Gradio application to AWS Lambda with AWS CDK
We now have a working application; the next step is to deploy it to AWS Lambda. Before we can start deploying, make sure you have the **[AWS CDK installed](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_install)** and **[configured your AWS credentials](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_prerequisites)**. In addition, we also need to install some Python dependencies
```bash
pip install "aws-cdk-lib>=2.0.0" "constructs>=10.0.0"
```
As with Gradio, we are not covering the creation of the IaC resources (`cdk.py`) in detail; if you want to learn more, take a look at the [AWS documentation](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-python.html). We now need to create a `cdk.json` and a `cdk.py`, which contain the infrastructure definition for our AWS Lambda function.
The `cdk.json` needs to include
```json
{
"app": "python3 cdk.py"
}
```
and the `cdk.py`
```python
import os
from pathlib import Path
from constructs import Construct
from aws_cdk import App, Stack, Environment, Duration, CfnOutput
from aws_cdk.aws_lambda import DockerImageFunction, DockerImageCode, Architecture, FunctionUrlAuthType
my_environment = Environment(account=os.environ["CDK_DEFAULT_ACCOUNT"], region=os.environ["CDK_DEFAULT_REGION"])
class GradioLambda(Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
# create function
lambda_fn = DockerImageFunction(
self,
"GradioApp",
code=DockerImageCode.from_image_asset(str(Path.cwd()), file="Dockerfile"),
architecture=Architecture.X86_64,
memory_size=8192,
timeout=Duration.minutes(2),
)
# add HTTPS url
fn_url = lambda_fn.add_function_url(auth_type=FunctionUrlAuthType.NONE)
CfnOutput(self, "functionUrl", value=fn_url.url)
app = App()
gradio_lambda = GradioLambda(app, "GradioLambda", env=my_environment)
app.synth()
```
As you might have seen, we are using the `DockerImageFunction`, meaning that we will deploy a container to AWS Lambda. For this, we need to create our `Dockerfile`, which CDK will then use to build and deploy. The `Dockerfile` will use the `python3.8.12` base image, install our `requirements.txt`, copy our `model/` and `app.py`, and run the app. Those steps are pretty much identical to common container setups. The only “unique” thing about our `Dockerfile` and being able to use it with AWS Lambda is that we are using the [AWS Lambda Web Adapter](https://github.com/awslabs/aws-lambda-web-adapter#aws-lambda-web-adapter). The AWS Lambda Web Adapter allows running regular web services on AWS Lambda.
```docker
# Dockerfile
FROM public.ecr.aws/docker/library/python:3.8.12-slim-buster
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.4.0 /lambda-adapter /opt/extensions/lambda-adapter
WORKDIR /var/task
COPY requirements.txt ./requirements.txt
RUN python -m pip install -r requirements.txt
COPY app.py ./
COPY model/ ./model/
CMD ["python3", "app.py"]
```
Now, we are ready to deploy our application.
```bash
cdk bootstrap
cdk deploy
```
CDK will now build our container, push it to AWS and then deploy our AWS Lambda function. It will also add an [AWS Lambda function URL](https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html), which we can access and send requests to our Lambda function. After a few minutes, we should see a console output similar to the one below.
```bash
✅ GradioLambda
✨ Deployment time: 359.83s
Outputs:
GradioLambda.functionUrl = https://qyjifuled7ajxv4elkrray7vpm0gpiqi.lambda-url.eu-west-1.on.aws/
```
## 4. Test the serverless Gradio application
We can now open our gradio application with the `functionUrl` from the deployment. This will start our Lambda function, load our transformers model and run our gradio application. The cold start can be between 30-60s, but afterward, the usage and prediction should take < 100ms, and we are only paying for the “compute” time, which is amazing.
![lambda-gradio-application](/static/blog/serverless-gradio/lambda-gradio.png)
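If you prefer to test the endpoint programmatically instead of through the browser UI, you can send a request to Gradio's REST route. This assumes Gradio 3.x exposes the prediction API under `/api/predict`; replace the URL with the `functionUrl` from your own deployment:
```python
import requests

# replace with the functionUrl from your own cdk deploy output
FUNCTION_URL = "https://<your-function-url>.lambda-url.<region>.on.aws"

response = requests.post(
    f"{FUNCTION_URL}/api/predict",
    json={"data": ["Running Gradio on AWS Lambda is amazing"]},
)
print(response.json())
```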
## 5. _Optional_: embed the Gradio application into a website
Gradio allows us to integrate our application as a “component” into HTML. Embedding our machine learning application allows everyone to try it out without needing to download or install anything — right in their browser!
You can copy the snippet below and replace the `FUNCTION_URL` with your AWS Lambda function URL, and it should work.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.4/gradio.js">
</script>
</head>
<body>
<gradio-app src="FUNCTION_URL"></gradio-app>
</body>
</html>
```
## Conclusion
With the help of the [AWS Lambda Web Adapter](https://github.com/awslabs/aws-lambda-web-adapter#aws-lambda-web-adapter) and Lambda Function URLs, we were able to deploy a Gradio application, which uses a Hugging Face DistilBERT model, to AWS Lambda for a completely serverless environment.
The biggest issue, in the past and still today, are cold starts (the time from the start until the application is ready to serve requests). A cold start of a few seconds is reasonably low for using a Transformer model. Additionally, the benefit of using Gradio here is that we have the UI coupled with the model, meaning once the UI is ready, the model is also ready. This allows us to integrate demos into websites or applications and only pay for the execution time of our lambda function, which is a great fit for infrequently accessed applications as well as proof-of-concept applications.
Additionally, you can easily implement a “warm keeper” for the application, for example with a scheduled invocation (see the sketch below).
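A rough sketch of one way to do this, assuming you extend the `GradioLambda` stack from above with a scheduled EventBridge rule (names and interval are illustrative; the scheduled event is not an HTTP request, so the app may just log an error for it, but the invocation keeps the execution environment warm):
```python
from aws_cdk import Duration
from aws_cdk.aws_events import Rule, Schedule
from aws_cdk.aws_events_targets import LambdaFunction

# inside GradioLambda.__init__, after creating lambda_fn:
# periodically invoke the function so the execution environment stays warm
Rule(
    self,
    "WarmKeeperRule",
    schedule=Schedule.rate(Duration.minutes(5)),
    targets=[LambdaFunction(lambda_fn)],
)
```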
---
Thanks for reading. If you have any questions, contact me via [email](mailto:philipp@huggingface.co) or [forum](https://discuss.huggingface.co/c/inference-endpoints/64). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Accelerate Sentence Transformers with Hugging Face Optimum | https://www.philschmid.de/optimize-sentence-transformers | 2022-08-02 | [
"BERT",
"OnnxRuntime",
"HuggingFace",
"Optimization"
] | Learn how to optimize Sentence Transformers using Hugging Face Optimum. You will learn how to dynamically quantize and optimize a Sentence Transformer for ONNX Runtime. | _last update: 2022-11-18_
In this session, you will learn how to optimize [Sentence Transformers](https://huggingface.co/sentence-transformers) using Optimum. The session will show you how to dynamically quantize and optimize a MiniLM [Sentence Transformers](https://huggingface.co/sentence-transformers) model using [Hugging Face Optimum](https://huggingface.co/docs/optimum/index) and [ONNX Runtime](https://onnxruntime.ai/). Hugging Face Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
_Note: dynamic quantization is currently only supported for CPUs, so we will not be utilizing GPUs / CUDA in this session._
By the end of this session, you will see how quantization and optimization with Hugging Face Optimum can result in a significant decrease in model latency.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Convert a Sentence Transformers model to ONNX and create custom Inference Pipeline](#2-convert-a-sentence-transformers-model-to-onnx-and-create-custom-inference-pipeline)
3. [Apply graph optimization techniques to the ONNX model](#3-apply-graph-optimization-techniques-to-the-onnx-model)
4. [Apply dynamic quantization using `ORTQuantizer` from Optimum](#4-apply-dynamic-quantization-using-ortquantizer-from-optimum)
5. [Test inference with the quantized model](#5-test-inference-with-the-quantized-model)
6. [Evaluate the performance and speed](#6-evaluate-the-performance-and-speed)
Let's get started! 🚀
_This tutorial was created and run on an c6i.xlarge AWS EC2 Instance._
## Quick intro: What are Sentence Transformers
[Sentence Transformers](https://huggingface.co/sentence-transformers) is a Python library for state-of-the-art sentence, text and image embeddings. The initial work is described in the paper [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084).
[Sentence Transformers](https://huggingface.co/sentence-transformers) can be used to compute embeddings for more than 100 languages and to build solutions for semantic textual similarity, semantic search, or paraphrase mining.
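For reference, this is what the native `sentence-transformers` API looks like (it requires an extra `pip install sentence-transformers`, which is not part of the setup below); in the rest of this post we replicate this behaviour with ONNX Runtime:
```python
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = st_model.encode(["This is an example sentence"])
print(embeddings.shape)  # (1, 384)
```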
---
## 1. Setup Development Environment
Our first step is to install Optimum, along with Evaluate and some other libraries. Running the following cell will install all the required packages for us including Transformers, PyTorch, and ONNX Runtime utilities:
```python
!pip install "optimum[onnxruntime]==1.5.0" transformers evaluate mkl-include mkl --upgrade
```
> If you want to run inference on a GPU, you can install 🤗 Optimum with `pip install optimum[onnxruntime-gpu]`.
## 2. Convert a Sentence Transformers model to ONNX and create custom Inference Pipeline
Before we can start quantizing, we need to convert our vanilla `sentence-transformers` model to the `onnx` format. To do this we will use the new [ORTModelForFeatureExtraction](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForFeatureExtraction) class calling the `from_pretrained()` method with the `from_transformers` attribute. The model we are using is [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), which maps sentences & paragraphs to a 384-dimensional dense vector space, can be used for tasks like clustering or semantic search, and was trained on the [1-billion sentence dataset](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2#training-data).
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
from pathlib import Path
model_id="sentence-transformers/all-MiniLM-L6-v2"
onnx_path = Path("onnx")
# load vanilla transformers and convert to onnx
model = ORTModelForFeatureExtraction.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)
```
When using `sentence-transformers` natively you can run inference by loading your model in the `SentenceTransformer` class and then calling the `.encode()` method. However this only works with the PyTorch based checkpoints, which we no longer have. To run inference using the Optimum `ORTModelForFeatureExtraction` class, we need to write some methods ourselves. Below we create a `SentenceEmbeddingPipeline` based on ["How to create a custom pipeline?"](https://huggingface.co/docs/transformers/v4.21.0/en/add_new_pipeline) from the Transformers documentation.
```python
from transformers import Pipeline
import torch.nn.functional as F
import torch
# copied from the model card
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
class SentenceEmbeddingPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
# we don't have any hyperameters to sanitize
preprocess_kwargs = {}
return preprocess_kwargs, {}, {}
def preprocess(self, inputs):
encoded_inputs = self.tokenizer(inputs, padding=True, truncation=True, return_tensors='pt')
return encoded_inputs
def _forward(self, model_inputs):
outputs = self.model(**model_inputs)
return {"outputs": outputs, "attention_mask": model_inputs["attention_mask"]}
def postprocess(self, model_outputs):
# Perform pooling
sentence_embeddings = mean_pooling(model_outputs["outputs"], model_outputs['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
return sentence_embeddings
```
We can now initialize our `SentenceEmbeddingPipeline` using our `ORTModelForFeatureExtraction` model and perform inference.
```python
# init pipeline
vanilla_emb = SentenceEmbeddingPipeline(model=model, tokenizer=tokenizer)
# run inference
pred = vanilla_emb("Could you assist me in finding my lost card?")
# print an excerpt from the sentence embedding
print(pred[0][:5])
# tensor([-0.0631, 0.0426, 0.0037, 0.0377, 0.0414])
```
If you want to learn more about exporting transformers model check-out [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx) blog post
## 3. Apply graph optimization techniques to the ONNX model
Graph optimizations are essentially graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations.
Examples of graph optimizations include:
- **Constant folding**: evaluate constant expressions at compile time instead of runtime
- **Redundant node elimination**: remove redundant nodes without changing graph structure
- **Operator fusion**: merge one node (i.e. operator) into another so they can be executed together
![operator fusion](/static/blog/optimizing-transformers-with-optimum/operator_fusion.png)
If you want to learn more about graph optimization you can take a look at the [ONNX Runtime documentation](https://onnxruntime.ai/docs/performance/graph-optimizations.html). We are going to first optimize the model and then dynamically quantize it to be able to use transformers-specific operators such as QAttention for quantization of attention layers.
To apply graph optimizations to our ONNX model, we will use the `ORTOptimizer`. With the help of an `OptimizationConfig`, the `ORTOptimizer` makes it easy to apply the optimizations. The `OptimizationConfig` is the configuration class handling all the ONNX Runtime optimization parameters.
```python
from optimum.onnxruntime import ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig
# create ORTOptimizer and define optimization configuration
optimizer = ORTOptimizer.from_pretrained(model)
optimization_config = OptimizationConfig(optimization_level=99) # enable all optimizations
# apply the optimization configuration to the model
optimizer.optimize(
save_dir=onnx_path,
optimization_config=optimization_config,
)
```
To test performance we can use the `ORTModelForFeatureExtraction` class again and provide an additional `file_name` parameter to load our optimized model. _(This also works for models available on the hub)._
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
# load optimized model
model = ORTModelForFeatureExtraction.from_pretrained(onnx_path, file_name="model_optimized.onnx")
# create optimized pipeline
optimized_emb = SentenceEmbeddingPipeline(model=model, tokenizer=tokenizer)
pred = optimized_emb("Could you assist me in finding my lost card?")
print(pred[0][:5])
# tensor([-0.0631, 0.0426, 0.0037, 0.0377, 0.0414])
```
## 4. Apply dynamic quantization using `ORTQuantizer` from Optimum
After we have optimized our model we can accelerate it even more by quantizing it using the `ORTQuantizer`. The `ORTQuantizer` can be used to apply dynamic quantization to decrease the model size and accelerate inference latency.
_We use the `avx512_vnni` config since the instance is powered by an intel ice-lake CPU supporting avx512._
```python
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig
# create ORTQuantizer and define quantization configuration
dynamic_quantizer = ORTQuantizer.from_pretrained(model)
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
# apply the quantization configuration to the model
model_quantized_path = dynamic_quantizer.quantize(
save_dir=onnx_path,
quantization_config=dqconfig,
)
```
Let's quickly check the new model size.
```python
import os
# get model file size
size = os.path.getsize(onnx_path / "model_optimized.onnx")/(1024*1024)
quantized_model = os.path.getsize(onnx_path / "model_optimized_quantized.onnx")/(1024*1024)
print(f"Model file size: {size:.2f} MB")
print(f"Quantized Model file size: {quantized_model:.2f} MB")
# Model file size: 86.66 MB
# Quantized Model file size: 63.47 MB
```
## 5. Test inference with the quantized model
[Optimum](https://huggingface.co/docs/optimum/main/en/pipelines#optimizing-with-ortoptimizer) has built-in support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). This allows us to leverage the same API that we know from using PyTorch and TensorFlow models.
Therefore we can load our quantized model with the `ORTModelForFeatureExtraction` class and use it in our `SentenceEmbeddingPipeline`.
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
model = ORTModelForFeatureExtraction.from_pretrained(onnx_path, file_name="model_optimized_quantized.onnx")
tokenizer = AutoTokenizer.from_pretrained(onnx_path)
q8_emb = SentenceEmbeddingPipeline(model=model, tokenizer=tokenizer)
pred = q8_emb("Could you assist me in finding my lost card?")
print(pred[0][:5])
# tensor([-0.0567, 0.0111, -0.0110, 0.0450, 0.0447])
```
## 6. Evaluate the performance and speed
As the last step, we want to take a detailed look at the performance and accuracy of our model. Applying optimization techniques, like graph optimizations or mixed-precision, not only impacts performance (latency), it might also have an impact on the accuracy of the model. So accelerating your model comes with a trade-off.
We are going to evaluate our Sentence Transformers model / Sentence Embeddings on the [Semantic Textual Similarity Benchmark](https://huggingface.co/datasets/glue/viewer/stsb/validation) from the [GLUE](https://huggingface.co/datasets/glue) dataset.
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
```python
from datasets import load_dataset
from evaluate import load
eval_dataset = load_dataset("glue","stsb",split="validation")
metric = load('glue', 'stsb')
# creating a subset for faster evaluation
# COMMENT IN to run evaluation on a subset of the dataset
# eval_dataset = eval_dataset.select(range(200))
```
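To get a feel for the data, we can peek at a single validation example. The fields shown are the ones used later during evaluation; the exact values printed are only illustrative.
```python
# inspect one example of the validation split (values are illustrative)
print(eval_dataset[0])
# {'sentence1': '...', 'sentence2': '...', 'label': 5.0, 'idx': 0}
```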
We can now leverage the [map](https://huggingface.co/docs/datasets/v2.1.0/en/process#map) function of [datasets](https://huggingface.co/docs/datasets/index) to iterate over the validation set of `stsb` and run a prediction for each data point. Therefore we write an `evaluate_stsb` helper method which uses our `SentenceEmbeddingPipeline` and computes the cosine similarity between the two sentence embeddings.
```python
def compute_sentence_similarity(sentence_1, sentence_2,pipeline):
embedding_1 = pipeline(sentence_1)
embedding_2 = pipeline(sentence_2)
# compute cosine similarity between two sentences
return torch.nn.functional.cosine_similarity(embedding_1, embedding_2, dim=1)
def evaluate_stsb(example):
default = compute_sentence_similarity(example["sentence1"], example["sentence2"], vanilla_emb)
quantized = compute_sentence_similarity(example["sentence1"], example["sentence2"], q8_emb)
return {
'reference': (example["label"] - 1) / (5 - 1), # rescale to [0,1]
'default': float(default),
'quantized': float(quantized),
}
# run evaluation
result = eval_dataset.map(evaluate_stsb)
# compute metrics
default_acc = metric.compute(predictions=result["default"], references=result["reference"])
quantized = metric.compute(predictions=result["quantized"], references=result["reference"])
print(f"vanilla model: pearson={default_acc['pearson']}%")
print(f"quantized model: pearson={quantized['pearson']}%")
print(f"The quantized model achieves {round(quantized['pearson']/default_acc['pearson'],2)*100:.2f}% accuracy of the fp32 model")
```
The results are:
```bash
vanilla model: pearson=0.8696194595133899%
quantized model: pearson=0.8663752613975557%
The quantized model achieves 100.00% accuracy of the fp32 model
```
Okay, now let's test the performance (latency) of our quantized model. We are going to use a payload with a sequence length of 128 for the benchmark. To keep it simple, we are going to use a python loop and calculate the average and p95 latency for our vanilla model and for the quantized model.
```python
from time import perf_counter
import numpy as np
payload="Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value. I cannot wait to see what is next for me"
print(f'Payload sequence length: {len(tokenizer(payload)["input_ids"])}')
def measure_latency(pipe):
latencies = []
# warm up
for _ in range(10):
_ = pipe(payload)
# Timed run
for _ in range(100):
start_time = perf_counter()
_ = pipe(payload)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
time_p95_ms = 1000 * np.percentile(latencies,95)
return f"P95 latency (ms) - {time_p95_ms}; Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f};", time_p95_ms
vanilla_model=measure_latency(vanilla_emb)
quantized_model=measure_latency(q8_emb)
print(f"Vanilla model: {vanilla_model[0]}")
print(f"Quantized model: {quantized_model[0]}")
print(f"Improvement through quantization: {round(vanilla_model[1]/quantized_model[1],2)}x")
```
The results are:
```bash
Payload sequence length: 128
Vanilla model: P95 latency (ms) - 25.639022301038494; Average latency (ms) - 19.75 +\- 2.72;
Quantized model: P95 latency (ms) - 12.289083890937036; Average latency (ms) - 11.76 +\- 0.37;
Improvement through quantization: 2.09x
```
We managed to accelerate our model latency from 25.6ms to 12.3ms or 2.09x while keeping 100% of the accuracy on the `stsb` dataset.
![performance](/static/blog/optimize-sentence-transformers/sentence-transfomeres-performance.png)
## Conclusion
We successfully quantized our vanilla Transformers model with Hugging Face and managed to accelerate our model latency from 25.6ms to 12.3ms or 2.09x while keeping 100% of the accuracy on the `stsb` dataset.
But I have to say that this isn't a plug and play process you can transfer to any Transformers model, task or dataset. |
Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition | https://www.philschmid.de/huggingface-transformers-keras-tf | 2021-12-21 | [
"HuggingFace",
"Keras",
"BERT",
"Tensorflow"
] | Learn how to fine-tune a non-English BERT using Hugging Face Transformers and Keras/TF, Transformers, datasets. | Welcome to this end-to-end Named Entity Recognition example using Keras. In this tutorial, we will use the Hugging Face `transformers` and `datasets` libraries together with `Tensorflow` & `Keras` to fine-tune a pre-trained non-English transformer for token-classification (NER).
If you want a more detailed example for token-classification you should check out this [notebook](https://github.com/huggingface/notebooks/blob/master/examples/token_classification-tf.ipynb) or the [chapter 7](https://huggingface.co/course/chapter7/2?fw=pt) of the [Hugging Face Course](https://huggingface.co/course/chapter7/2?fw=pt).
## Installation
```python
#!pip install "tensorflow==2.6.0"
!pip install transformers datasets seqeval tensorboard --upgrade
```
```python
!sudo apt-get install git-lfs
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## Setup & Configuration
In this step we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. the `tokenizer` and `model` we will use.
In this example we are going to fine-tune [deepset/gbert-base](https://huggingface.co/deepset/gbert-base), a German BERT model.
```python
model_id = "deepset/gbert-base"
```
You can change the `model_id` to another BERT-like model for a different language, e.g. **Italian** or **French**, to use this script to train an Italian or French Named Entity Recognition model, as sketched below. But don't forget to also adjust the dataset in the next step.
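For example, these could be possible alternatives from the Hugging Face Hub (untested suggestions, not used in this post):
```python
# hypothetical alternatives for other languages (remember to adjust the dataset as well)
# model_id = "camembert-base"                  # French
# model_id = "dbmdz/bert-base-italian-cased"   # Italian
```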
## Dataset & Pre-processing
As dataset we will use [GermaNER](https://huggingface.co/datasets/germaner), a German named entity recognition dataset from the [GermaNER: Free Open German Named Entity Recognition Tool](https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2015-benikovaetal-gscl2015-germa.pdf) paper. The dataset contains the four default coarse
named entity classes LOCation, PERson, ORGanisation, and OTHer from the [GermEval 2014 task](https://sites.google.com/site/germeval2014ner/). If you are fine-tuning in a different language than German, you can search the [Hub](https://huggingface.co/datasets?task_ids=task_ids:named-entity-recognition&sort=downloads) for a dataset in your language or take a look at [Datasets for Entity Recognition](https://github.com/juand-r/entity-recognition-datasets).
```python
dataset_id="germaner"
seed=33
```
To load the `germaner` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
dataset = load_dataset(dataset_id)
```
We can display all our NER classes by inspecting the features of our dataset. These `ner_labels` will later be used to create a user-friendly output after we have fine-tuned our model.
```python
# accessing the "train" split for the "ner_tags" feature
ner_labels = dataset["train"].features["ner_tags"].feature.names
# ['B-LOC', 'B-ORG', 'B-OTH', 'B-PER', 'I-LOC', 'I-ORG', 'I-OTH', 'I-PER', 'O']
```
### Pre-processing & Tokenization
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
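As a quick sanity check, we can look at what the tokenizer produces for a short German sentence (the exact ids depend on the model's vocabulary):
```python
# tokenize an example sentence to inspect the returned features
encoded = tokenizer("Hugging Face ist ein Technologieunternehmen aus New York.")
print(encoded.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])
```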
Compared to a text-classification or question-answering dataset, the "text" in `germaner` is already split into a list of words (`tokens`). So we cannot simply use `tokenizer(text)`; instead we need to pass `is_split_into_words=True` to the `tokenizer` method. Additionally we add `truncation=True` to truncate texts that are bigger than the maximum size allowed by the model.
```python
def tokenize_and_align_labels(examples):
tokenized_inputs = tokenizer(
examples["tokens"], truncation=True, is_split_into_words=True
)
labels = []
for i, label in enumerate(examples[f"ner_tags"]):
# get a list of tokens their connecting word id (for words tokenized into multiple chunks)
word_ids = tokenized_inputs.word_ids(batch_index=i)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
# Special tokens have a word id that is None. We set the label to -100 so they are automatically
# ignored in the loss function.
if word_idx is None:
label_ids.append(-100)
# We set the label for the first token of each word.
elif word_idx != previous_word_idx:
label_ids.append(label[word_idx])
            # For the other tokens in a word, we set the label to the current word's label as well
else:
label_ids.append(label[word_idx])
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
```
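To make the label alignment more tangible, here is a small sanity check on a single example (how words are split into sub-word tokens depends on the vocabulary, so the output is only illustrative):
```python
# inspect how words map to sub-word tokens for the first training example
example = dataset["train"][0]
encoding = tokenizer(example["tokens"], truncation=True, is_split_into_words=True)
print(example["tokens"][:5])     # the first words of the sentence
print(encoding.word_ids()[:10])  # None for special tokens, repeated ids for split words
```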
Now we can process our whole dataset using the `.map` method with `batched=True`.
```python
tokenized_datasets = dataset.map(tokenize_and_align_labels, batched=True)
```
Since we later only need the tokenized columns and the labels to train the model, we filter for the columns that have been added by processing the dataset. The `tokenizer_columns` are the dataset column(s) to load in the `tf.data.Dataset`.
```python
pre_tokenizer_columns = set(dataset["train"].features)
tokenizer_columns = list(set(tokenized_datasets["train"].features) - pre_tokenizer_columns)
# ['attention_mask', 'labels', 'token_type_ids', 'input_ids']
```
Since our dataset only includes one split (`train`), we need to apply `train_test_split` ourselves to have an evaluation/test dataset for evaluating the results during and after training.
```python
# test size will be 15% of train dataset
test_size=.15
processed_dataset = tokenized_datasets["train"].shuffle(seed=seed).train_test_split(test_size=test_size)
processed_dataset
```
---
## Fine-tuning the model using `Keras`
Now that our `dataset` is processed, we can download the pretrained model and fine-tune it. But before we can do this we need to convert our Hugging Face `datasets` Dataset into a `tf.data.Dataset`. For this we will use the `.to_tf_dataset` method and a `data collator` for token-classification (data collators are objects that will form a batch by using a list of dataset elements as input).
### Hyperparameter
```python
from huggingface_hub import HfFolder
import tensorflow as tf
id2label = {str(i): label for i, label in enumerate(ner_labels)}
label2id = {v: k for k, v in id2label.items()}
num_train_epochs = 5
train_batch_size = 16
eval_batch_size = 32
learning_rate = 2e-5
weight_decay_rate=0.01
num_warmup_steps=0
output_dir=model_id.split("/")[1]
hub_token = HfFolder.get_token() # or your token directly "hf_xxx"
hub_model_id = f'{model_id.split("/")[1]}-{dataset_id}'
fp16=True
# Train in mixed-precision float16
# Comment this line out if you're using a GPU that will not benefit from this
if fp16:
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```
### Converting the dataset to a `tf.data.Dataset`
```python
from transformers import DataCollatorForTokenClassification
# Data collator that will dynamically pad the inputs received, as well as the labels.
data_collator = DataCollatorForTokenClassification(
tokenizer=tokenizer, return_tensors="tf"
)
# converting our train dataset to tf.data.Dataset
tf_train_dataset = processed_dataset["train"].to_tf_dataset(
columns= tokenizer_columns,
shuffle=False,
batch_size=train_batch_size,
collate_fn=data_collator,
)
# converting our test dataset to tf.data.Dataset
tf_eval_dataset = processed_dataset["test"].to_tf_dataset(
columns=tokenizer_columns,
shuffle=False,
batch_size=eval_batch_size,
collate_fn=data_collator,
)
```
### Download the pretrained transformer model and fine-tune it.
```python
from transformers import TFAutoModelForTokenClassification, create_optimizer
num_train_steps = len(tf_train_dataset) * num_train_epochs
optimizer, lr_schedule = create_optimizer(
init_lr=learning_rate,
num_train_steps=num_train_steps,
weight_decay_rate=weight_decay_rate,
num_warmup_steps=num_warmup_steps,
)
model = TFAutoModelForTokenClassification.from_pretrained(
model_id,
id2label=id2label,
label2id=label2id,
)
model.compile(optimizer=optimizer)
```
### Callbacks
As mentioned in the beginning, we want to use the [Hugging Face Hub](https://huggingface.co/models) for model versioning and monitoring. Therefore we will push our model weights to the Hub during and after training to version them.
Additionally we want to track the performance during training, therefore we will push the `Tensorboard` logs along with the weights to the Hub and use the "Training Metrics" feature to monitor our training in real-time.
```python
import os
from transformers.keras_callbacks import PushToHubCallback
from tensorflow.keras.callbacks import TensorBoard as TensorboardCallback
callbacks=[]
callbacks.append(TensorboardCallback(log_dir=os.path.join(output_dir,"logs")))
if hub_token:
callbacks.append(PushToHubCallback(output_dir=output_dir,
tokenizer=tokenizer,
hub_model_id=hub_model_id,
hub_token=hub_token))
```
![tensorboard](/static/blog/huggingface-transformers-keras-tf/tensorboard.png)
## Training
Start training by calling `model.fit`.
```python
model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
callbacks=callbacks,
epochs=num_train_epochs,
)
```
## Evaluation
The traditional framework used to evaluate token classification prediction is `seqeval`. This metric does not behave like the standard accuracy: it will actually take the lists of labels as strings, not integers, so we will need to fully decode the predictions and labels before passing them to the metric.
```python
from datasets import load_metric
import numpy as np
metric = load_metric("seqeval")
def evaluate(model, dataset, ner_labels):
all_predictions = []
all_labels = []
for batch in dataset:
logits = model.predict(batch)["logits"]
labels = batch["labels"]
predictions = np.argmax(logits, axis=-1)
for prediction, label in zip(predictions, labels):
for predicted_idx, label_idx in zip(prediction, label):
if label_idx == -100:
continue
all_predictions.append(ner_labels[predicted_idx])
all_labels.append(ner_labels[label_idx])
return metric.compute(predictions=[all_predictions], references=[all_labels])
results = evaluate(model, tf_eval_dataset, ner_labels=list(model.config.id2label.values()))
```
```json
{
"LOC": {
"precision": 0.8931558935361217,
"recall": 0.9115250291036089,
"f1": 0.9022469752256578,
"number": 2577
},
"ORG": {
"precision": 0.7752112676056339,
"recall": 0.8075117370892019,
"f1": 0.7910319057200345,
"number": 1704
},
"OTH": {
"precision": 0.6788389513108615,
"recall": 0.7308467741935484,
"f1": 0.703883495145631,
"number": 992
},
"PER": {
"precision": 0.9384366140137708,
"recall": 0.9430199430199431,
"f1": 0.9407226958993098,
"number": 2457
},
"overall_precision": 0.8520523797532108,
"overall_recall": 0.8754204398447607,
"overall_f1": 0.8635783563042368,
"overall_accuracy": 0.976147969774973
}
```
---
## Create Model Card with evaluation results
To complete our Hugging Face Hub repository we will create a model card with the used hyperparameters and the evaluation results.
```python
from transformers.modelcard import TrainingSummary
eval_results = {
"precision":float(results["overall_precision"]),
"recall":float(results["overall_recall"]),
"f1":float(results["overall_f1"]),
"accuracy":float(results["overall_accuracy"]),
}
training_summary = TrainingSummary(
model_name = hub_model_id,
language = "de",
tags=[],
finetuned_from=model_id,
tasks="token-classification",
dataset=dataset_id,
dataset_tags=dataset_id,
dataset_args="default",
eval_results=eval_results,
hyperparameters={
"num_train_epochs": num_train_epochs,
"train_batch_size": train_batch_size,
"eval_batch_size": eval_batch_size,
"learning_rate": learning_rate,
"weight_decay_rate": weight_decay_rate,
"num_warmup_steps": num_warmup_steps,
"fp16": fp16
}
)
model_card = training_summary.to_model_card()
model_card_path = os.path.join(output_dir, "README.md")
with open(model_card_path, "w") as f:
f.write(model_card)
```
Push the model card to the repository.
```python
from huggingface_hub import HfApi
api = HfApi()
user = api.whoami(hub_token)
api.upload_file(
token=hub_token,
repo_id=f"{user['name']}/{hub_model_id}",
path_or_fileobj=model_card_path,
path_in_repo="README.md",
)
```
![model-card](/static/blog/huggingface-transformers-keras-tf/model-card.png)
---
# Run Managed Training using Amazon Sagemaker
If you want to run this example on Amazon SageMaker to benefit from the Training Platform, follow the cells below. I converted the notebook into a python script [train.py](./scripts/train.py), which accepts the same hyperparameters and can be run on SageMaker using the `HuggingFace` estimator.
```python
#!pip install sagemaker
```
```python
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
```python
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'model_id': 'deepset/gbert-base',
'dataset_id': 'germaner',
'num_train_epochs': 5,
'train_batch_size': 16,
'eval_batch_size': 32,
'learning_rate': 2e-5,
'weight_decay_rate': 0.01,
'num_warmup_steps': 0,
'hub_token': HfFolder.get_token(),
'hub_model_id': 'sagemaker-gbert-base-germaner',
'fp16': True
}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
transformers_version='4.12.3',
tensorflow_version='2.5.1',
py_version='py36',
hyperparameters = hyperparameters
)
# starting the train job
huggingface_estimator.fit()
```
## Conclusion
We managed to successfully fine-tune a German BERT model using Transformers and Keras, without any heavy lifting or complex and unnecessary boilerplate code. The new utilities like `.to_tf_dataset` are improving the developer experience of the Hugging Face ecosystem to become more Keras and TensorFlow friendly. Combining those new features with the Hugging Face Hub we get a fully-managed MLOps pipeline for model-versioning and experiment management using Keras callback API.
Big Thanks to [Matt](https://twitter.com/carrigmat) for all the work he is doing to improve the experience using Transformers and Keras.
Now it's your turn! Adjust the notebook to train a BERT for another language like French, Spanish or Italian. 🇫🇷 🇪🇸 🇮🇹
---
You can find the code [here](https://github.com/philschmid/transformers-keras-e2e-ner) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Serverless BERT with HuggingFace, AWS Lambda, and Docker | https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker | 2020-12-06 | [
"AWS",
"Serverless",
"BERT"
] | Learn how to use the newest cutting edge computing power of AWS with the benefits of serverless architectures to leverage Google's "State-of-the-Art" NLP Model. | It's the most wonderful time of the year. Of course, I'm not talking about Christmas but re:Invent. It is re:Invent
time.
![reinvent](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/reinvent.png)
photo from the keynote by Andy Jassy, rights belong to Amazon
In the opening keynote, Andy Jassy presented the AWS Lambda Container Support, which allows us to use custom container
(docker) images up to 10GB as a runtime for AWS Lambda. With that, we can build runtimes larger than the previous 250 MB
limit, be it for "State-of-the-Art" NLP APIs with BERT or complex processing.
Furthermore, you can now configure AWS Lambda functions with up to
[10 GB of Memory and 6 vCPUs](https://aws.amazon.com/de/blogs/aws/new-for-aws-lambda-functions-with-up-to-10-gb-of-memory-and-6-vcpus/?nc1=b_rp).
For those who are not that familiar with it: BERT was published in 2018 by Google and stands for
Bidirectional Encoder Representations from Transformers. It is designed to learn word representations or embeddings from
unlabeled text by jointly conditioning on both left and right context. Since then, Transformers have been the
"State-of-the-Art" architecture in NLP.
![bert-context](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/bert-context.png)
Google Search started using BERT end of 2019 in
[1 out of 10](https://www.blog.google/products/search/search-language-understanding-bert/) English searches, since then
the usage of BERT in Google Search increased to almost
[100% of English-based queries](https://searchon.withgoogle.com/). But that's not it. Google powers now over
[70 languages with BERT for Google Search](https://twitter.com/searchliaison/status/1204152378292867074).
![google-search](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/google-search.png)
[https://youtu.be/ZL5x3ovujiM?t=484](https://youtu.be/ZL5x3ovujiM?t=484)
We are going to use the newest cutting edge computing power of AWS with the benefits of serverless architectures to
leverage Google's "State-of-the-Art" NLP Model.
We deploy a BERT Question-Answering API in a serverless AWS Lambda environment. Therefore we use the
[Transformers](https://github.com/huggingface/transformers) library by HuggingFace,
the [Serverless Framework](https://serverless.com/), AWS Lambda, and Amazon ECR.
Before we start I wanted to encourage you to read my blog [philschmid.de](https://www.philschmi.de) where I have already
written several blog posts about [Serverless](https://www.philschmid.de/aws-lambda-with-custom-docker-image) or
[how to fine-tune BERT models](https://www.philschmid.de/bert-text-classification-in-a-different-language).
You find the complete code for it in this
[Github repository](https://github.com/philschmid/serverless-bert-huggingface-aws-lambda-docker).
---
## Transformers Library by Huggingface
The [Transformers library](https://github.com/huggingface/transformers) provides state-of-the-art machine learning
architectures like BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5 for Natural Language Understanding (NLU) and Natural
Language Generation (NLG). It also provides thousands of pre-trained models in 100+ different languages.
## AWS Lambda
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a serverless computing service that lets you
run code without managing servers. It executes your code only when required and scales automatically, from a few
requests per day to thousands per second.
## Amazon Elastic Container Registry
[Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/?nc1=h_ls) is a fully managed container registry.
It allows us to store, manage, share docker container images. You can share docker containers privately within your
organization or publicly worldwide for anyone.
## Serverless Framework
[The Serverless Framework](https://www.serverless.com/) helps us develop and deploy AWS Lambda functions. It’s a CLI
that offers structure, automation, and best practices right out of the box.
---
## The Architecture
![architecture](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/architecture.png)
---
## Tutorial
Before we get started, make sure you have the [Serverless Framework](https://serverless.com/) configured and set up. You
also need a working `docker` environment. We use `docker` to create our own custom image including all needed `Python`
dependencies and our `BERT` model, which we then use in our AWS Lambda function. Furthermore, you need access to an AWS
Account to create an IAM User, an ECR Registry, an API Gateway, and the AWS Lambda function.
We design the API so that we send a context (a small paragraph) and a question to it, and it responds with the answer to the
question.
```python
context = """We introduce a new language representation model called BERT, which stands for
Bidirectional Encoder Representations from Transformers. Unlike recent language
representation models (Peters et al., 2018a; Radford et al., 2018), BERT is
designed to pretrain deep bidirectional representations from unlabeled text by
jointly conditioning on both left and right context in all layers. As a result,
the pre-trained BERT model can be finetuned with just one additional output
layer to create state-of-the-art models for a wide range of tasks, such as
question answering and language inference, without substantial taskspecific
architecture modifications. BERT is conceptually simple and empirically
powerful. It obtains new state-of-the-art results on eleven natural language
processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute
improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1
question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD
v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."""
question_one = "What is BERTs best score on Squadv2 ?"
# 83 . 1
question_two = "What does the 'B' in BERT stand for?"
# 'bidirectional encoder representations from transformers'
```
**What are we going to do:**
- create a `Python` Lambda function with the Serverless Framework.
- add the `BERT`model to our function and create an inference pipeline.
- Create a custom `docker` image
- Test our function locally with the Lambda Runtime Interface Emulator
- Deploy a custom `docker` image to ECR
- Deploy AWS Lambda function with a custom `docker` image
- Test our Serverless `BERT` API
You can find the complete code in this
[Github repository](https://github.com/philschmid/serverless-bert-huggingface-aws-lambda-docker).
---
## Create a `Python` Lambda function with the Serverless Framework.
First, we create our AWS Lambda function by using the Serverless CLI with the `aws-python3` template.
```bash
serverless create --template aws-python3 --path serverless-bert
```
This CLI command will create a new directory containing a `handler.py`, `.gitignore`, and `serverless.yaml` file. The
`handler.py` contains some basic boilerplate code.
```python
import json
def hello(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"input": event
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
---
## Add the `BERT`model to our function and create an inference pipeline.
To add our `BERT` model to our function we have to load it from the
[model hub of HuggingFace](https://huggingface.co/models). For this, I have created a python script. Before we can
execute this script we have to install the `transformers` library to our local environment and create a `model`
directory in our `serverless-bert/` directory.
```bash
mkdir model && pip3 install torch==1.5.0 transformers==3.4.0
```
After we have installed `transformers` we create a `get_model.py` file in the `serverless-bert/` directory and include the script
below.
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
def get_model(model):
"""Loads model from Hugginface model hub"""
try:
model = AutoModelForQuestionAnswering.from_pretrained(model,use_cdn=True)
model.save_pretrained('./model')
except Exception as e:
raise(e)
def get_tokenizer(tokenizer):
"""Loads tokenizer from Hugginface model hub"""
try:
tokenizer = AutoTokenizer.from_pretrained(tokenizer)
tokenizer.save_pretrained('./model')
except Exception as e:
raise(e)
get_model('mrm8488/mobilebert-uncased-finetuned-squadv2')
get_tokenizer('mrm8488/mobilebert-uncased-finetuned-squadv2')
```
To execute the script we run `python3 get_model.py` in the `serverless-bert/` directory.
```bash
python3 get_model.py
```
_**Tip**: add the `model` directory to gitignore._
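A minimal `.gitignore` for this project could look like the following (adjust to your needs):
```bash
model/
__pycache__/
*.pyc
```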
The next step is to adjust our `handler.py` and include our `serverless_pipeline()`, which initializes our model and
tokenizer and returns a `predict` function, we can use in our `handler`.
```python
import json
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, AutoConfig
def encode(tokenizer, question, context):
"""encodes the question and context with a given tokenizer"""
encoded = tokenizer.encode_plus(question, context)
return encoded["input_ids"], encoded["attention_mask"]
def decode(tokenizer, token):
"""decodes the tokens to the answer with a given tokenizer"""
answer_tokens = tokenizer.convert_ids_to_tokens(
token, skip_special_tokens=True)
return tokenizer.convert_tokens_to_string(answer_tokens)
def serverless_pipeline(model_path='./model'):
"""Initializes the model and tokenzier and returns a predict function that ca be used as pipeline"""
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForQuestionAnswering.from_pretrained(model_path)
def predict(question, context):
"""predicts the answer on an given question and context. Uses encode and decode method from above"""
input_ids, attention_mask = encode(tokenizer,question, context)
start_scores, end_scores = model(torch.tensor(
[input_ids]), attention_mask=torch.tensor([attention_mask]))
ans_tokens = input_ids[torch.argmax(
start_scores): torch.argmax(end_scores)+1]
answer = decode(tokenizer,ans_tokens)
return answer
return predict
# initializes the pipeline
question_answering_pipeline = serverless_pipeline()
def handler(event, context):
try:
        # loads the incoming event into a dictionary
body = json.loads(event['body'])
# uses the pipeline to predict the answer
answer = question_answering_pipeline(question=body['question'], context=body['context'])
return {
"statusCode": 200,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({'answer': answer})
}
except Exception as e:
print(repr(e))
return {
"statusCode": 500,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({"error": repr(e)})
}
```
---
## Create a custom `docker` image
Before we can create our `docker` image we need to create a `requirements.txt` file with all the dependencies we want to
install into it.
We are going to use a lighter PyTorch version and the transformers library.
```bash
https://download.pytorch.org/whl/cpu/torch-1.5.0%2Bcpu-cp38-cp38-linux_x86_64.whl
transformers==3.4.0
```
To containerize our Lambda Function, we create a `dockerfile` in the same directory and copy the following content.
```bash
FROM public.ecr.aws/lambda/python:3.8
# Copy function code and models into our /var/task
COPY ./ ${LAMBDA_TASK_ROOT}/
# install our dependencies
RUN python3 -m pip install -r requirements.txt --target ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "handler.handler" ]
```
Additionally we can add a `.dockerignore` file to exclude files from your container image.
```bash
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
serverless.yaml
get_model.py
```
To build our custom `docker` image we run.
```bash
docker build -t bert-lambda .
```
---
## Test our function locally
AWS also released the [Lambda Runtime Interface Emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator/)
that enables us to perform local testing of the container image and check that it will run when deployed to Lambda.
We can start our `docker` by running.
```bash
docker run -p 8080:8080 bert-lambda
```
Afterwards, in a separate terminal, we can then locally invoke the function using `curl` or a REST-Client.
```bash
curl --request POST \
--url http://localhost:8080/2015-03-31/functions/function/invocations \
--header 'Content-Type: application/json' \
--data '{"body":"{\"context\":\"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial taskspecific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).\",\n\"question\":\"What is the GLUE score for Bert?\"\n}"}'
# {"statusCode": 200, "headers": {"Content-Type": "application/json", "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Credentials": true}, "body": "{\"answer\": \"80 . 5 %\"}"}%
```
_Beware that we have to `stringify` our body since we are passing it directly into the function (only for testing)._
![local-request](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/local-request.png)
---
## Deploy a custom `docker` image to ECR
Since we now have a local `docker` image we can deploy this to ECR. Therefore we need to create an ECR repository with
the name `bert-lambda`.
```bash
aws ecr create-repository --repository-name bert-lambda > /dev/null
```
To be able to push our images we need to login to ECR. We are using the `aws` CLI v2.x. Therefore we need to define some
environment variables to make deploying easier.
```bash
aws_region=eu-central-1
aws_account_id=891511646143
aws ecr get-login-password \
--region $aws_region \
| docker login \
--username AWS \
--password-stdin $aws_account_id.dkr.ecr.$aws_region.amazonaws.com
```
Next we need to `tag` / rename our previously created image to an ECR format. The format for this is
`{AccountID}.dkr.ecr.{region}.amazonaws.com/{repository-name}`
```bash
docker tag bert-lambda $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/bert-lambda
```
To check if it worked we can run `docker images` and should see an image with our tag as name
![docker-image](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/docker-image.png)
Finally, we push the image to ECR Registry.
```bash
docker push $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/bert-lambda
```
---
## Deploy AWS Lambda function with a custom `docker` image
I provide the complete `serverless.yaml` for this example, but we go through all the details we need for our `docker`
image and leave out all standard configurations. If you want to learn more about the `serverless.yaml`, I suggest you
check out
[Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero). In
this article, I went through each configuration and explained the usage of each one.
```yaml
service: serverless-bert-lambda-docker
provider:
name: aws # provider
region: eu-central-1 # aws region
memorySize: 5120 # optional, in MB, default is 1024
timeout: 30 # optional, in seconds, default is 6
functions:
questionanswering:
image: 891511646143.dkr.ecr.eu-central-1.amazonaws.com/bert-lambda:latest #ecr url
events:
- http:
path: qa # http path
method: post # http method
```
To use a `docker` image in our `serverless.yaml` we have to add the `image` key to our `function` section. The value of
`image` is the URL of our `docker` image.
For an ECR image, the URL should look like this `{AccountID}.dkr.ecr.{region}.amazonaws.com/{repository-name}@{digest}`
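If you prefer pinning the image by digest instead of the `latest` tag, you can look it up with the AWS CLI (the digest value will differ for your account):
```bash
# get the digest of the image pushed to the bert-lambda repository
aws ecr describe-images --repository-name bert-lambda \
  --query 'imageDetails[0].imageDigest' --output text
```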
In order to deploy the function, we run `serverless deploy`.
```bash
serverless deploy
```
After this process is done we should see something like this.
![serverless-deploy](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/serverless-deploy.png)
---
## Test our Serverless `BERT` API
To test our Lambda function we can use Insomnia, Postman, or any other REST client. Just add a JSON with a `context` and
a `question` to the body of your request. Let's try it with our example from the colab notebook.
```json
{
"context": "We introduce a new language representation model called BERT, which stands for idirectional Encoder Representations from Transformers. Unlike recent language epresentation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial taskspecific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"question": "What is BERTs best score on Squadv2 ?"
}
```
Our `serverless_pipeline()` answered our question correctly with `83.1`.
![insomnia](/static/blog/serverless-bert-with-huggingface-aws-lambda-docker/insomnia.png)
The first request after we deployed our `docker` based Lambda function took 27.8s. The reason is that AWS apparently
caches the `docker` container somewhere on the first initial call to provision it.
I waited more than 15 minutes and tested it again. The cold start now took 6.7s and a warm request around 220ms.
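If you want to measure this yourself, a simple way is to let `curl` report the total request time. The endpoint URL and payload below are placeholders for the values from your own deployment:
```bash
# rough latency check against the deployed endpoint (replace URL and payload)
curl -s -o /dev/null -w "total: %{time_total}s\n" --request POST \
  --url https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/qa \
  --header 'Content-Type: application/json' \
  --data '{"context":"...","question":"..."}'
```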
---
## Conclusion
The release of the AWS Lambda Container Support enables much wider use of AWS Lambda and Serverless. It fixes many
existing problems and gives us greater scope for the deployment of serverless applications.
We were able to deploy a "State-of-the-Art" NLP model without the need to manage any server. It will automatically scale
up to thousands of parallel requests without any worries. The increase of configurable Memory and vCPUs boosts this cold
start even more.
The future looks more than golden for AWS Lambda and Serverless.
---
You can find the [GitHub repository](https://github.com/philschmid/serverless-bert-huggingface-aws-lambda-docker) with
the complete code [here](https://github.com/philschmid/serverless-bert-huggingface-aws-lambda-docker).
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
New Serverless Transformers using Amazon SageMaker Serverless Inference and Hugging Face | https://www.philschmid.de/serverless-transformers-sagemaker-huggingface | 2021-12-15 | [
"AWS",
"BERT",
"Serverless",
"Sagemaker"
] | Learn how to deploy Hugging Face Transformers serverless using the new Amazon SageMaker Serverless Inference. | Last week at re:Invent 2021, AWS announced several new features for [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to improve the Machine Learning experience using AWS and Amazon SageMaker. Amongst all the new announcements during [Swami Sivasubramanian](https://www.linkedin.com/in/swaminathansivasubramanian) Machine Learning Keynote, were three special ones, at least for me.
The new Amazon [SageMaker Training Compiler](https://aws.amazon.com/blogs/aws/new-introducing-sagemaker-training-compiler/), which optimizes Deep Learning models to accelerate training by more efficiently using GPU instances and allowing higher batch sizes. You can check out my blog post ["Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler"](https://www.philschmid.de/huggingface-amazon-sagemaker-training-compiler) to learn how to use it.
The [Amazon SageMaker Studio Lab](https://aws.amazon.com/sagemaker/studio-lab/) is a free machine learning (ML) development environment based on top of jupyter similar to [Google Colab - Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb?hl=de), which includes free CPU or GPU sessions for your projects.
Last but not least [Amazon SageMaker Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html) which we will have a closer look at and learn how to use in this blog. Amazon SageMaker Serverless Inference as the name suggests is a fully managed serverless inference solution with pay-per-use pricing built on top of [AWS Lambda](https://aws.amazon.com/lambda/?nc1=h_ls).
Serverless Machine Learning especially Serverless Deep Learning is a topic, which occupies me already for a long time. I created several blog posts on how to use Transformers, BERT, or PyTorch in Serverless environments with ["Serverless BERT with HuggingFace, AWS Lambda, and Docker"](https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker), ["New Serverless Bert with Huggingface, AWS Lambda, and AWS EFS"](https://www.philschmid.de/new-serverless-bert-with-huggingface-aws-lambda) or ["Scaling Machine Learning from ZERO to HERO"](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero).
The new Amazon SageMaker Serverless Inference feature strongly simplifies hosting Serverless Transformers or Deep Learning models, but it still has some limitations.
In this blog post, I will show you how you can easily deploy any [Hugging Face Transformer](https://huggingface.co/models) compatible model using Amazon SageMaker Serverless Inference and the [Hugging Face Inference DLCs](https://huggingface.co/docs/sagemaker/main) to quickly build cost-effective Proof of Concepts for your machine learning applications. We will take a look at the current existing limitations for it.
You can find the code for it in this [Github Repository](https://github.com/philschmid/cdk-samples/tree/master/sagemaker-serverless-huggingface-endpoint).
---
## Amazon SageMaker Serverless Inference
Amazon SageMaker Serverless Inference is a fully managed serverless inference option that makes it easy for you to deploy and scale ML models, built on top of [AWS Lambda](https://aws.amazon.com/lambda/?nc1=h_ls) and fully integrated into the Amazon SageMaker service. Serverless Inference looks ideal for workloads that have idle periods, can tolerate cold starts, and aren't latency- or throughput-critical, as well as for cost-effective & fast proofs-of-concept.
Talking about cost-effectiveness: with Amazon SageMaker Serverless Inference you only pay for the compute capacity used to process inference requests, billed by the millisecond, and for the amount of data processed. It comes with a free tier of "150,000 seconds of inference duration". Learn more about pricing [here](https://aws.amazon.com/sagemaker/pricing/).
Since Amazon SageMaker Serverless Inference is built on top of AWS Lambda we not only have the already existing limitations for AWS Lambda, like [cold starts](https://aws.amazon.com/blogs/compute/new-for-aws-lambda-predictable-start-up-times-with-provisioned-concurrency/), we also have additional SageMaker Serverless specific limitations. All of the following limitations existed at the time of writing the blog.
- Compared to AWS Lambda, SageMaker Serverless Inference only supports up to 6GB of memory.
- The maximum concurrency for a single endpoint is limited to 50, with a total concurrency of 200 per account, meaning one endpoint can scale at most to 50 concurrently running environments/functions.
- The endpoint must respond successfully to health checks within 3 minutes (for me unclear, because serverless functions are normally not always "warm")
- The "function" timeout for the container to respond to inference requests is 1 minute.
- **found limitation when testing:** Currently, Transformer models > 512MB create errors
You can find all of the current limitations and configurations in the [documentation of Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html#serverless-endpoints-how-it-works).
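For readers who prefer the SageMaker Python SDK over the CDK used below, these limits map roughly to a `ServerlessInferenceConfig` like this (a sketch based on newer SDK versions, not part of the tutorial code):
```python
from sagemaker.serverless import ServerlessInferenceConfig

# maximum values allowed by Serverless Inference at the time of writing
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=6144,  # up to 6GB of memory
    max_concurrency=50,      # up to 50 concurrent invocations per endpoint
)
```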
At the time of writing this blog Serverless Inference is in Preview and only available in the following 6 Regions: US East (N. Virginia) `us-east-1`, US East (Ohio) `us-east-2`, US West (Oregon) `us-west-2`, Europe (Ireland) `eu-west-1`, Asia Pacific (Tokyo) `ap-northeast-1` and Asia Pacific (Sydney) `ap-southeast-2`.
---
## Tutorial
Before we get started, I'd like to give you some information about what we are going to do. We are going to create an Amazon SageMaker Serverless Endpoint using the [Hugging Face Inference DLC](https://huggingface.co/docs/sagemaker/main). The Hugging Face Inference DLCs are pre-built, optimized containers including all required packages to run seamless, optimized inference with Hugging Face Transformers. Next to the optimized Deep Learning Frameworks it also includes the [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit), an open-source library for serving 🤗 Transformers models on Amazon SageMaker, with default pre-processing, predicting, and postprocessing.
We are going to use the [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/?nc1=h_ls) to create our infrastructure for our Serverless Endpoint. More specifically, we are going to build an application using the Hugging Face Inference DLC for serverless model serving and Amazon [API Gateway](https://aws.amazon.com/de/api-gateway/) as a proxy to call our Amazon SageMaker Endpoint.
You can find the code for it in this [Github Repository](https://github.com/philschmid/cdk-samples/tree/master/sagemaker-serverless-huggingface-endpoint).
### Architecture
![architecture](/static/blog/serverless-transformers-sagemaker-huggingface/architecture.png)
Before we get started, make sure you have the [AWS CDK installed](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_install) and [configured your AWS credentials](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_prerequisites).
**What are we going to do:**
- selecting a model from the [Hugging Face Hub](https://huggingface.co/models)
- bootstrap our CDK project
- Deploy the model using CDK and Serverless Inference
- Run inference and test the API
### Selecting a model from the [Hugging Face Hub](https://huggingface.co/models)
For those of you who don't know what the Hugging Face Hub is, you should definitely take a look [here](https://huggingface.co/docs/hub/main). But the TL;DR; is that the Hugging Face Hub is an open community-driven collection of state-of-the-art models. At the time of writing the blog post, we have 20,000+ free models available to use.
To select the model we want to use we navigate to [hf.co/models](http://hf.co/models) then pre-filter using the task on the left, e.g. `summarization`. For this blog post, I went with the [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), which was fine-tuned on CNN articles for summarization.
![hugging face hub](/static/blog/serverless-transformers-sagemaker-huggingface/hub.png)
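Before deploying, you can sanity-check any model from the Hub locally with the `transformers` pipeline. This runs on your own machine and is independent of SageMaker; the article text below is just a placeholder:
```python
from transformers import pipeline

# local test of the selected summarization model (downloads the checkpoint)
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
article = "Hugging Face and AWS announced several new features for Amazon SageMaker at re:Invent 2021 ..."
print(summarizer(article)[0]["summary_text"])
```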
## **Bootstrap our CDK project**
Deploying applications using the CDK may require additional resources for CDK to store for example assets. The process of provisioning these initial resources is called [bootstrapping](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html). So before being able to deploy our application, we need to make sure that we bootstrapped our project.
1. Clone the Github repository
```bash
git clone https://github.com/philschmid/cdk-samples.git
cd sagemaker-serverless-huggingface-endpoint
```
2. Install the CDK required dependencies.
```bash
pip3 install -r requirements.txt
```
3. [Bootstrap](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html) your application in the cloud.
```bash
cdk bootstrap \
-c model="distilbert-base-uncased-finetuned-sst-2-english" \
-c task="text-classification"
```
## **Deploy the model using CDK**
Now we are able to deploy our application with the whole infrastructure and deploy our previously selected Transformer `distilbert-base-uncased-finetuned-sst-2-english` to Amazon SageMaker Serverless Inference. Our application uses the [CDK context](https://docs.aws.amazon.com/cdk/latest/guide/context.html) to accept dynamic parameters for the deployment. We can provide our model with the key `model` and our task with the key `task`.
In our case we will provide `model=distilbert-base-uncased-finetuned-sst-2-english` and `task=text-classification`.
```bash
cdk deploy \
-c model="distilbert-base-uncased-finetuned-sst-2-english" \
-c task="text-classification"
```
_Note: Currently, Transformer models > 512MB create errors_
After running the `cdk deploy` command we will get an output of all resources, which are going to be created. We then confirm our deployment and the CDK will create all required resources, deploy our AWS Lambda function and our Model to Amazon SageMaker. This takes around 3-5 minutes.
After the deployment, the console output should look similar to this.
```python
✅ HuggingfaceServerlessSagemakerEndpoint
Outputs:
HuggingfaceServerlessSagemakerEndpoint.SageMakerEndpointUrl = "https://io6528yt5a.execute-api.us-east-1.amazonaws.com/prod/distilbert-base-uncased-finetuned-sst-2-english"
HuggingfaceServerlessSagemakerEndpoint.apiEndpoint9349E63C = "https://io6528yt5a.execute-api.us-east-1.amazonaws.com/prod/"
Stack ARN:
arn:aws:cloudformation:us-east-1:558105141721:stack/HuggingfaceServerlessSagemakerEndpoint/66c9f250-5db4-11ec-a371-0a05d9e5c641
```
## **Run inference and test the API**
After the deployment is successfully complete we can grab our Endpoint URL `HuggingfaceServerlessSagemakerEndpoint.SageMakerEndpointUrl` from the CLI output and use any REST client to test it.
The first request/cold start can take 10-30s depending on the model you use
![request](/static/blog/serverless-transformers-sagemaker-huggingface/rest.png)
the request as curl to copy
```bash
curl --request POST \
--url https://io6528yt5a.execute-api.us-east-1.amazonaws.com/prod/distilbert-base-uncased-finetuned-sst-2-english \
--header 'Content-Type: application/json' \
--data '{
"inputs":"With the new Amazon SageMaker Serverless Inference feature i can quickly test, models from Hugging Face. How awesome is this?"
}'
```
---
> While testing this functionality I ran into several issues when deploying models > 512 MB, like `sshleifer/distilbart-cnn-12-6`. I am in contact with the AWS team to hopefully solve this issue soon.
## Conclusion
With the help of the AWS CDK, we were able to deploy an Amazon SageMaker Serverless Inference Endpoint for Hugging Face Transformers with 1 Line of code. It is great to see how things have evolved over the last year from complex container building for Serverless Deep Learning up to just providing a `model_id` and `task` to create a fully managed serverless HTTP API.
Still, in my opinion Serverless Inference is not yet ready for real production workloads. There are barriers we need to break before we can recommend it for high-scale, low-latency real-time production use cases.
SageMaker Serverless Inference will 100% help you accelerate your machine learning journey and enables you to build fast and cost-effective proofs-of-concept where cold starts or scalability are not mission-critical, and which can quickly be moved to GPUs or higher-scale environments.
Additionally, SageMaker Serverless creates "only" real-time endpoints, meaning we are missing the great triggers that exist in AWS Lambda, like S3 or SQS, to build more event-driven applications and architectures. **But what isn't yet may still come.**
You should definitely give SageMaker Serverless Inference a try!
---
You can find the code [here](https://github.com/philschmid/cdk-samples/tree/master/sagemaker-serverless-huggingface-endpoint) and feel free to open a thread on the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Multilingual Serverless XLM RoBERTa with HuggingFace, AWS Lambda | https://www.philschmid.de/multilingual-serverless-xlm-roberta-with-huggingface | 2020-12-17 | [
"AWS",
"Serverless",
"BERT"
] | Learn how to build a Multilingual Serverless BERT Question Answering API with a model size of more than 2GB and then test it in German and French. | Currently, we have 7.5 billion people living in the world in around 200 nations. Only
[1.2 billion people of them are native English speakers](https://en.wikipedia.org/wiki/List_of_countries_by_English-speaking_population).
This leads to a lot of unstructured non-English textual data.
Most of the tutorials and blog posts demonstrate how to build text classification, sentiment analysis,
question-answering, or text generation models with BERT-based architectures in English. To help close this gap, we are
going to build a multilingual Serverless Question Answering API.
Multilingual models describe machine learning models that can understand different languages. An example of a
multilingual model is [mBERT](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)
from Google research.
[This model supports and understands 104 languages.](https://github.com/google-research/bert/blob/master/multilingual.md)
We are going to use the new AWS Lambda Container Support to build a Question-Answering API with a `xlm-roberta`.
Therefore we use the [Transformers](https://github.com/huggingface/transformers) library by HuggingFace,
the [Serverless Framework](https://serverless.com/), AWS Lambda, and Amazon ECR.
The special characteristic of this architecture is that we serve a "State-of-the-Art" model of more than 2GB in a
Serverless Environment.
Before we start I want to encourage you to read my blog [philschmid.de](https://www.philschmid.de) where I have already
written several blog posts about [Serverless](https://www.philschmid.de/aws-lambda-with-custom-docker-image), how to
deploy [BERT in a Serverless Environment](https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker),
or [How to fine-tune BERT models](https://www.philschmid.de/bert-text-classification-in-a-different-language).
You can find the complete code for it in this
[Github repository](https://github.com/philschmid/multilingual-serverless-qa-aws-lambda).
---
## Transformers Library by Huggingface
The [Transformers library](https://github.com/huggingface/transformers) provides state-of-the-art machine learning
architectures like BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5 for Natural Language Understanding (NLU) and Natural
Language Generation (NLG). It also provides thousands of pre-trained models in 100+ different languages.
## AWS Lambda
[AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a serverless computing service that lets you
run code without managing servers. It executes your code only when required and scales automatically, from a few
requests per day to thousands per second.
## Amazon Elastic Container Registry
[Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/?nc1=h_ls) is a fully managed container registry.
It allows us to store, manage, share docker container images. You can share docker containers privately within your
organization or publicly worldwide for anyone.
## Serverless Framework
[The Serverless Framework](https://www.serverless.com/) helps us develop and deploy AWS Lambda functions. It’s a CLI
that offers structure, automation, and best practices right out of the box.
---
## The Architecture
![architecture](/static/blog/multilingual-serverless-xlm-roberta-with-huggingface/architektur.png)
---
## Tutorial
Before we get started, make sure you have the [Serverless Framework](https://serverless.com/) configured and set up. You
also need a working `docker` environment. We use `docker` to create our own custom image including all needed `Python`
dependencies and our multilingual `xlm-roberta` model, which we then use in our AWS Lambda function. Furthermore, you
need access to an AWS Account to create an IAM User, an ECR Registry, an API Gateway, and the AWS Lambda function.
We design the API in the following way:
We send a context (small paragraph) and a question to it, and it responds with the answer to the question. As model, we are
going to use the `xlm-roberta-large-squad2` trained by [deepset.ai](https://deepset.ai/) from the
[transformers model-hub](https://huggingface.co/deepset/xlm-roberta-large-squad2#). The model size is more than 2GB.
It's huge.
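To make this design concrete, a request body will look roughly like the following sketch (illustrative values; the full German and French examples follow in step 6), and the API responds with a JSON object containing the `answer`:

```json
{
  "context": "Philipp lives in Nuremberg and works on open-source machine learning.",
  "question": "Where does Philipp live?"
}
```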
**What are we going to do:**
- create a `Python` Lambda function with the Serverless Framework.
- add the multilingual `xlm-roberta` model to our function and create an inference pipeline.
- Create a custom `docker` image and test it.
- Deploy a custom `docker` image to ECR.
- Deploy AWS Lambda function with a custom `docker` image.
- Test our Multilingual Serverless API.
You can find the complete code in this
[Github repository](https://github.com/philschmid/multilingual-serverless-qa-aws-lambda).
---
## 1. Create a `Python` Lambda function with the Serverless Framework
First, we create our AWS Lambda function by using the Serverless CLI with the `aws-python3` template.
```bash
serverless create --template aws-python3 --path serverless-multilingual
```
This CLI command will create a new directory containing a `handler.py`, `.gitignore`, and `serverless.yaml` file. The
`handler.py` contains some basic boilerplate code.
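For reference, the generated boilerplate looks roughly like this (we will replace it completely in step 2):

```python
import json


def hello(event, context):
    # default handler created by the aws-python3 template
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event,
    }

    return {"statusCode": 200, "body": json.dumps(body)}
```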
---
## 2. Add the multilingual `xlm-roberta` model to our function and create an inference pipeline
To add our `xlm-roberta` model to our function we have to load it from the
[model hub of HuggingFace](https://huggingface.co/models). For this, I have created a python script. Before we can
execute this script we have to install the `transformers` library to our local environment and create a `model`
directory in our `serverless-multilingual/` directory.
```bash
mkdir model && pip3 install torch==1.5.0 transformers==3.4.0
```
After we installed `transformers` we create `get_model.py` file and include the script below.
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
def get_model(model):
"""Loads model from Hugginface model hub"""
try:
model = AutoModelForQuestionAnswering.from_pretrained(model,use_cdn=True)
model.save_pretrained('./model')
except Exception as e:
raise(e)
def get_tokenizer(tokenizer):
"""Loads tokenizer from Hugginface model hub"""
try:
tokenizer = AutoTokenizer.from_pretrained(tokenizer)
tokenizer.save_pretrained('./model')
except Exception as e:
raise(e)
get_model('deepset/xlm-roberta-large-squad2')
get_tokenizer('deepset/xlm-roberta-large-squad2')
```
To execute the script we run `python3 get_model.py` in the `serverless-multilingual/` directory.
```bash
python3 get_model.py
```
_**Tip**: add the `model` directory to `.gitignore`._
The next step is to adjust our `handler.py` and include our `serverless_pipeline()`, which initializes our model and
tokenizer. It then returns a `predict` function, which we can use in our `handler`.
```python
import json
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, AutoConfig
def encode(tokenizer, question, context):
"""encodes the question and context with a given tokenizer"""
encoded = tokenizer.encode_plus(question, context)
return encoded["input_ids"], encoded["attention_mask"]
def decode(tokenizer, token):
"""decodes the tokens to the answer with a given tokenizer"""
answer_tokens = tokenizer.convert_ids_to_tokens(
token, skip_special_tokens=True)
return tokenizer.convert_tokens_to_string(answer_tokens)
def serverless_pipeline(model_path='./model'):
"""Initializes the model and tokenzier and returns a predict function that ca be used as pipeline"""
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForQuestionAnswering.from_pretrained(model_path)
def predict(question, context):
"""predicts the answer on an given question and context. Uses encode and decode method from above"""
input_ids, attention_mask = encode(tokenizer,question, context)
start_scores, end_scores = model(torch.tensor(
[input_ids]), attention_mask=torch.tensor([attention_mask]))
ans_tokens = input_ids[torch.argmax(
start_scores): torch.argmax(end_scores)+1]
answer = decode(tokenizer,ans_tokens)
return answer
return predict
# initializes the pipeline
question_answering_pipeline = serverless_pipeline()
def handler(event, context):
try:
        # loads the incoming event into a dictionary
body = json.loads(event['body'])
# uses the pipeline to predict the answer
answer = question_answering_pipeline(question=body['question'], context=body['context'])
return {
"statusCode": 200,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({'answer': answer})
}
except Exception as e:
print(repr(e))
return {
"statusCode": 500,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": json.dumps({"error": repr(e)})
}
```
## 3. Create a custom `docker` image and test it.
Before we can create our `docker` image we need to create a `requirements.txt` file with all the dependencies we want to
install in our `docker` image.
We are going to use a lighter Pytorch Version and the transformers library.
```bash
https://download.pytorch.org/whl/cpu/torch-1.5.0%2Bcpu-cp38-cp38-linux_x86_64.whl
transformers==3.4.0
```
To containerize our Lambda Function, we create a `Dockerfile` in the same directory and copy the following content.
```bash
FROM public.ecr.aws/lambda/python:3.8
# Copy function code and models into our /var/task
COPY ./ ${LAMBDA_TASK_ROOT}/
# install our dependencies
RUN python3 -m pip install -r requirements.txt --target ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "handler.handler" ]
```
Additionally we can add a `.dockerignore` file to exclude files from your container image.
```bash
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
serverless.yaml
get_model.py
```
To build our custom `docker` image we run.
```bash
docker build -t multilingual-lambda .
```
We can start our `docker` by running.
```bash
docker run -p 8080:8080 multilingual-lambda
```
Afterwards, in a separate terminal, we can then locally invoke the function using `curl` or a REST-Client.
```bash
curl --request POST \
--url http://localhost:8080/2015-03-31/functions/function/invocations \
--header 'Content-Type: application/json' \
--data '{
"body":"{\"context\":\"Saisonbedingt ging der Umsatz im ersten Quartal des Geschäftsjahres 2019 um 4 Prozent auf 1.970 Millionen Euro zurück, verglichen mit 1.047 Millionen Euro im vierten Quartal des vorangegangenen Geschäftsjahres. Leicht rückläufig war der Umsatz in den Segmenten Automotive (ATV) und Industrial Power Control (IPC). Im Vergleich zum Konzerndurchschnitt war der Rückgang im Segment Power Management & Multimarket (PMM) etwas ausgeprägter und im Segment Digital Security Solutions (DSS) deutlich ausgeprägter. Die Bruttomarge blieb von Quartal zu Quartal weitgehend stabil und fiel von 39,8 Prozent auf 39,5 Prozent. Darin enthalten sind akquisitionsbedingte Abschreibungen sowie sonstige Aufwendungen in Höhe von insgesamt 16 Millionen Euro, die hauptsächlich im Zusammenhang mit der internationalen Rectifier-Akquisition stehen. Die bereinigte Bruttomarge blieb ebenfalls nahezu unverändert und lag im ersten Quartal bei 40,4 Prozent, verglichen mit 40,6 Prozent im letzten Quartal des Geschäftsjahres 2018. Das Segmentergebnis für das erste Quartal des laufenden Fiskaljahres belief sich auf 359 Millionen Euro, verglichen mit 400 Millionen Euro ein Quartal zuvor. Die Marge des Segmentergebnisses sank von 19,5 Prozent auf 18,2 Prozent.\",\n\"question\":\"Was war die bereinigte Bruttomarge?\"\n}"}'
# {"statusCode": 200, "headers": {"Content-Type": "application/json", "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Credentials": true}, "body": "{\"answer\": \"40,4 Prozent\"}"}%
```
_Beware we have to `stringify` our body since we are passing it directly into the function (only for testing)._
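If you prefer to build the escaped body programmatically instead of writing it by hand, a small helper like this (hypothetical, only for local testing) produces the same structure:

```python
import json

# shortened placeholder for the German context used above
payload = {
    "context": "Die bereinigte Bruttomarge lag im ersten Quartal bei 40,4 Prozent ...",
    "question": "Was war die bereinigte Bruttomarge?",
}

# the Lambda runtime interface expects `body` as a JSON string,
# so we stringify the inner payload before wrapping it into the event
event = {"body": json.dumps(payload)}
print(json.dumps(event))
```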
## 4. Deploy a custom `docker` image to ECR
Since we now have a local `docker` image we can deploy this to ECR. Therefore we need to create an ECR repository with
the name `multilingual-lambda`.
```bash
aws ecr create-repository --repository-name multilingual-lambda > /dev/null
```
To be able to push our images we need to login to ECR. We are using the `aws` CLI v2.x. Therefore we need to define some
environment variables to make deploying easier.
```bash
aws_region=eu-central-1
aws_account_id=891511646143
aws ecr get-login-password \
--region $aws_region \
| docker login \
--username AWS \
--password-stdin $aws_account_id.dkr.ecr.$aws_region.amazonaws.com
```
Next we need to `tag` / rename our previously created image to an ECR format. The format for this is
`{AccountID}.dkr.ecr.{region}.amazonaws.com/{repository-name}`
```bash
docker tag multilingual-lambda $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/multilingual-lambda
```
To check if it worked we can run `docker images` and should see an image with our tag as its name.
![docker-image](/static/blog/multilingual-serverless-xlm-roberta-with-huggingface/docker-image.png)
Finally, we push the image to ECR Registry.
```bash
docker push $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/multilingual-lambda
```
## 5. Deploy AWS Lambda function with a custom `docker` image
I provide the complete `serverless.yaml` for this example, but we go through all the details we need for our `docker`
image and leave out all standard configurations. If you want to learn more about the `serverless.yaml`, I suggest you
check out
[Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero). In
this article, I went through each configuration and explained the usage of them.
_**Attention**: We need at least 9GB of memory and 300s as timeout._
```yaml
service: multilingual-qa-api
provider:
name: aws # provider
region: eu-central-1 # aws region
memorySize: 10240 # optional, in MB, default is 1024
timeout: 300 # optional, in seconds, default is 6
functions:
questionanswering:
image: 891511646143.dkr.ecr.eu-central-1.amazonaws.com/multilingual-lambda@sha256:4d08a8eb6286969a7594e8f15360f2ab6e86b9dd991558989a3b65e214ed0818
events:
- http:
path: qa # http path
method: post # http method
```
To use a `docker` image in our `serverless.yaml` we have to add the `image` key to our `function` section, with the URL
of our `docker` image as value.
For an ECR image, the URL should look like this `<account>.dkr.ecr.<region>.amazonaws.com/<repository>@<digest>` (e.g
`000000000000.dkr.ecr.sa-east-1.amazonaws.com/test-lambda-docker@sha256:6bb600b4d6e1d7cf521097177dd0c4e9ea373edb91984a505333be8ac9455d38`)
You can get the ECR URL via the
[AWS Console](https://eu-central-1.console.aws.amazon.com/ecr/repositories?region=eu-central-1).
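Alternatively, you can look up the digest with the AWS CLI (the query below assumes the repository contains only our image):

```bash
aws ecr describe-images \
  --repository-name multilingual-lambda \
  --query 'imageDetails[0].imageDigest'
```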
In order to deploy the function, we run `serverless deploy`.
```bash
serverless deploy
```
After this process is done we should see something like this.
![serverless-deploy](/static/blog/multilingual-serverless-xlm-roberta-with-huggingface/serverless-deploy.png)
---
## 6. Test our Multilingual Serverless API
To test our Lambda function we can use Insomnia, Postman, or any other REST client. Just add a JSON with a `context` and
a `question` to the body of your request. Let´s try it with a `German` example and then with a `French` example.
_Be aware that the first invocation of your function triggers a cold start. Due to the model size, the cold start
takes longer than 30s, so the first request will run into an API Gateway timeout._
**German:**
```json
{
"context": "Saisonbedingt ging der Umsatz im ersten Quartal des Geschäftsjahres 2019 um 4 Prozent auf 1.970 Millionen Euro zurück, verglichen mit 1.047 Millionen Euro im vierten Quartal des vorangegangenen Geschäftsjahres. Leicht rückläufig war der Umsatz in den Segmenten Automotive (ATV) und Industrial Power Control (IPC). Im Vergleich zum Konzerndurchschnitt war der Rückgang im Segment Power Management & Multimarket (PMM) etwas ausgeprägter und im Segment Digital Security Solutions (DSS) deutlich ausgeprägter. Die Bruttomarge blieb von Quartal zu Quartal weitgehend stabil und fiel von 39,8 Prozent auf 39,5 Prozent. Darin enthalten sind akquisitionsbedingte Abschreibungen sowie sonstige Aufwendungen in Höhe von insgesamt 16 Millionen Euro, die hauptsächlich im Zusammenhang mit der internationalen Rectifier-Akquisition stehen. Die bereinigte Bruttomarge blieb ebenfalls nahezu unverändert und lag im ersten Quartal bei 40,4 Prozent, verglichen mit 40,6 Prozent im letzten Quartal des Geschäftsjahres 2018. Das Segmentergebnis für das erste Quartal des laufenden Fiskaljahres belief sich auf 359 Millionen Euro, verglichen mit 400 Millionen Euro ein Quartal zuvor. Die Marge des Segmentergebnisses sank von 19,5 Prozent auf 18,2 Prozent.",
"question": "Was war die bereinigte Bruttomarge?"
}
```
Our `serverless_pipeline()` answered our question correctly with `40,4 Prozent`.
![insomnia-ger](/static/blog/multilingual-serverless-xlm-roberta-with-huggingface/insomnia-ger.png)
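The same request as `curl` (the endpoint URL is a placeholder, use the one printed by `serverless deploy`; the context is shortened here):

```bash
curl --request POST \
  --url https://<api-id>.execute-api.eu-central-1.amazonaws.com/<stage>/qa \
  --header 'Content-Type: application/json' \
  --data '{"context": "... Die bereinigte Bruttomarge blieb ebenfalls nahezu unverändert und lag im ersten Quartal bei 40,4 Prozent ...", "question": "Was war die bereinigte Bruttomarge?"}'
```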
**French:**
```json
{
"context": "En raison de facteurs saisonniers, le chiffre d'affaires du premier trimestre de l'exercice 2019 a diminué de 4 % pour atteindre 1 970 millions d'euros, contre 1 047 millions d'euros au quatrième trimestre de l'exercice précédent. Les ventes ont légèrement diminué dans les segments de l'automobile (ATV) et de la régulation de la puissance industrielle (IPC). Par rapport à la moyenne du groupe, la baisse a été légèrement plus prononcée dans le segment de la gestion de l'énergie et du multimarché (PMM) et nettement plus prononcée dans le segment des solutions de sécurité numérique (DSS). La marge brute est restée largement stable d'un trimestre à l'autre, passant de 39,8 % à 39,5 %. Ce montant comprend l'amortissement lié à l'acquisition et d'autres dépenses totalisant 16 millions d'euros, principalement liées à l'acquisition de Rectifier international. La marge brute ajustée est également restée pratiquement inchangée à 40,4 % au premier trimestre, contre 40,6 % au dernier trimestre de l'exercice 2018. Le bénéfice sectoriel pour le premier trimestre de l'exercice en cours s'est élevé à 359 millions d'euros, contre 400 millions d'euros un trimestre plus tôt. La marge du résultat du segment a diminué de 19,5 % à 18,2 %.",
"question": "Quelle était la marge brute ajustée ?"
}
```
Our `serverless_pipeline()` answered our question correctly with `40,4%`.
![insomnia-fr](/static/blog/multilingual-serverless-xlm-roberta-with-huggingface/insomnia-fr.png)
## Conclusion
The release of AWS Lambda Container Support and the increase of memory up to 10GB enable a much wider use of AWS
Lambda and Serverless. It fixes many existing problems and gives us greater scope for the deployment of serverless
applications.
**I mean we deployed a docker container containing a "State-of-the-Art" multilingual NLP model bigger than 2GB in a
Serverless Environment without the need to manage any server.**
It will automatically scale up to thousands of parallel requests without any worries.
The future looks more than golden for AWS Lambda and Serverless.
---
You can find the [GitHub repository](https://github.com/philschmid/multilingual-serverless-qa-aws-lambda) with the
complete code [here](https://github.com/philschmid/multilingual-serverless-qa-aws-lambda).
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Accelerate Vision Transformer (ViT) with Quantization using Optimum | https://www.philschmid.de/optimizing-vision-transformer | 2022-07-19 | [
"ViT",
"OnnxRuntime",
"HuggingFace",
"Optimization"
] | Learn how to optimize Vision Transformer (ViT) using Hugging Face Optimum. You will learn how to dynamically quantize a ViT model for ONNX Runtime. | _last update: 2022-11-18_
In this session, you will learn how to optimize Vision Transformers models using Optimum. The session will show you how to dynamically quantize and optimize a ViT model using [Hugging Face Optimum](https://huggingface.co/docs/optimum/index) and [ONNX Runtime](https://onnxruntime.ai/). Hugging Face Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
_Note: dynamic quantization is currently only supported for CPUs, so we will not be utilizing GPUs / CUDA in this session._
By the end of this session, you will see how quantization and optimization with Hugging Face Optimum can result in a significant decrease in model latency while keeping almost 100% of the full-precision model's accuracy. Furthermore, you'll see how to easily apply some advanced quantization and optimization techniques shown here so that your models take much less of an accuracy hit than they would otherwise.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Convert a Hugging Face `Transformers` model to ONNX for inference](#2-convert-a-hugging-face-transformers-model-to-onnx-for-inference)
3. [Apply dynamic quantization using `ORTQuantizer` from Optimum](#3-apply-dynamic-quantization-using-ortquantizer-from-optimum)
4. [Test inference with the quantized model](#4-test-inference-with-the-quantized-model)
5. [Evaluate the performance and speed](#5-evaluate-the-performance-and-speed)
Let's get started! 🚀
_This tutorial was created and run on an c6i.xlarge AWS EC2 Instance._
---
## Quick intro: Vision Transformer (ViT) by Google Brain
The Vision Transformer (ViT) is basically BERT, but applied to images. It attains excellent results compared to state-of-the-art convolutional networks. In order to provide images to the model, each image is split into a sequence of fixed-size patches (typically of resolution 16x16 or 32x32), which are linearly embedded. One also adds a [CLS] token at the beginning of the sequence in order to classify images. Next, one adds absolute position embeddings and provides this sequence to the Transformer encoder.
![vision-transformer-architecture](/static/blog/optimizing-vision-transformer/vision-transformer-architecture.webp)
- Paper: https://arxiv.org/abs/2010.11929
- Official repo (in JAX): https://github.com/google-research/vision_transformer
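To make the patch-embedding step more tangible, here is a minimal NumPy sketch (not the actual ViT implementation) that splits a 224x224 image into 16x16 patches and projects each flattened patch into the hidden size:

```python
import numpy as np

# dummy image: height x width x channels
image = np.random.rand(224, 224, 3)
patch_size = 16
hidden_size = 768  # embedding dimension of ViT-Base

# split the image into (224/16)^2 = 196 flattened patches of size 16*16*3
patches = image.reshape(224 // patch_size, patch_size, 224 // patch_size, patch_size, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch_size * patch_size * 3)

# learnable linear projection (random weights here) into the Transformer hidden size
projection = np.random.rand(patch_size * patch_size * 3, hidden_size)
embeddings = patches @ projection

print(embeddings.shape)  # (196, 768) -> patch sequence, before adding [CLS] and position embeddings
```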
## 1. Setup Development Environment
Our first step is to install Optimum, along with Evaluate and some other libraries. Running the following cell will install all the required packages for us including Transformers, PyTorch, and ONNX Runtime utilities:
```python
!pip install "optimum[onnxruntime]==1.5.0" evaluate[evaluator] sklearn mkl-include mkl --upgrade
```
> If you want to run inference on a GPU, you can install 🤗 Optimum with `pip install optimum[onnxruntime-gpu]`.
## 2. Convert a Hugging Face `Transformers` model to ONNX for inference
Before we can start quantizing we need to convert our vanilla `transformers` model to the `onnx` format. To do this we will use the new [ORTModelForImageClassification](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForImageClassification) class calling the `from_pretrained()` method with the `from_transformers` attribute. The model we are using is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [beans](https://huggingface.co/datasets/beans) dataset ([nateraw/vit-base-beans](https://huggingface.co/nateraw/vit-base-beans)) achieving an accuracy of 96.88%.
```python
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import AutoFeatureExtractor
from pathlib import Path
model_id="nateraw/vit-base-beans"
onnx_path = Path("onnx")
# load vanilla transformers and convert to onnx
model = ORTModelForImageClassification.from_pretrained(model_id, from_transformers=True)
preprocessor = AutoFeatureExtractor.from_pretrained(model_id)
# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
preprocessor.save_pretrained(onnx_path)
```
One neat thing about 🤗 Optimum is that it allows you to run ONNX models with the `pipeline()` function from 🤗 Transformers. This means that you get all the pre- and post-processing features for free, without needing to re-implement them for each model! Here's how you can run inference with our vanilla ONNX model:
`https://datasets-server.huggingface.co/assets/beans/--/default/validation/30/image/image.jpg`
![bean-sample](/static/blog/optimizing-vision-transformer/bean.jpeg)
```python
from transformers import pipeline
vanilla_clf = pipeline("image-classification", model=model, feature_extractor=preprocessor)
print(vanilla_clf("https://datasets-server.huggingface.co/assets/beans/--/default/validation/30/image/image.jpg"))
```
If you want to learn more about exporting transformers models, check out the [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx) blog post.
## 3. Apply dynamic quantization using `ORTQuantizer` from Optimum
The `ORTQuantizer` can be used to apply dynamic quantization to decrease the model size and accelerate inference latency.
_We use the `avx512_vnni` config since the instance is powered by an Intel Ice Lake CPU supporting AVX512-VNNI._
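If you are unsure whether your CPU supports AVX512-VNNI, you can check the CPU flags on Linux before picking a quantization config:

```bash
# prints the flag if the CPU advertises AVX512-VNNI, otherwise nothing
lscpu | grep -o avx512_vnni | sort -u
```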
```python
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig
# create ORTQuantizer and define quantization configuration
dynamic_quantizer = ORTQuantizer.from_pretrained(model)
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
# apply the quantization configuration to the model
model_quantized_path = dynamic_quantizer.quantize(
save_dir=onnx_path,
quantization_config=dqconfig,
)
```
Lets quickly check the new model size.
```python
import os
# get model file size
size = os.path.getsize(onnx_path / "model.onnx")/(1024*1024)
quantized_model = os.path.getsize(onnx_path / "model_quantized.onnx")/(1024*1024)
print(f"Model file size: {size:.2f} MB")
print(f"Quantized Model file size: {quantized_model:.2f} MB")
# Model file size: 330.27 MB
# Quantized Model file size: 84.50 MB
```
## 4. Test inference with the quantized model
[Optimum](https://huggingface.co/docs/optimum/main/en/pipelines#optimizing-with-ortoptimizer) has built-in support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). This allows us to leverage the same API that we know from using PyTorch and TensorFlow models.
Therefore we can load our quantized model with `ORTModelForImageClassification` class and transformers `pipeline`.
```python
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import pipeline, AutoFeatureExtractor
model = ORTModelForImageClassification.from_pretrained(onnx_path, file_name="model_quantized.onnx")
preprocessor = AutoFeatureExtractor.from_pretrained(onnx_path)
q8_clf = pipeline("image-classification", model=model, feature_extractor=preprocessor)
print(q8_clf("https://datasets-server.huggingface.co/assets/beans/--/default/validation/30/image/image.jpg"))
```
## 5. Evaluate the performance and speed
To evaluate the model performance and speed, we are going to use the `test` split of the [beans](https://huggingface.co/datasets/beans) dataset, containing only 3 classes ('angular_leaf_spot', 'bean_rust', 'healthy') and 128 images. The evaluation is done using [Huggingface/evaluate](https://huggingface.co/docs/evaluate/index), a library for easily evaluating machine learning models and datasets.
We evaluated the vanilla model outside of this example using the same `evaluator`; the vanilla model achieved an accuracy of `96.88%` on our dataset.
```python
from evaluate import evaluator
from datasets import load_dataset
e = evaluator("image-classification")
eval_dataset = load_dataset("beans",split=["test"])[0]
results = e.compute(
model_or_pipeline=q8_clf,
data=eval_dataset,
metric="accuracy",
input_column="image",
label_column="labels",
label_mapping=model.config.label2id,
strategy="simple",
)
print(f"Vanilla model: 96.88%")
print(f"Quantized model: {results['accuracy']*100:.2f}%")
print(f"The quantized model achieves {round(results['accuracy']/0.9688,4)*100:.2f}% accuracy of the fp32 model")
# Vanilla model: 96.88%
# Quantized model: 96.88%
# The quantized model achieves 99.99% accuracy of the fp32 model
```
Okay, now let's test the performance (latency) of our quantized model. We are going to use the [beans sample](https://datasets-server.huggingface.co/assets/beans/--/default/validation/30/image/image.jpg) for the benchmark. To keep it simple, we are going to use a python loop and calculate the avg, mean & p95 latency for our vanilla model and for the quantized model.
```python
from time import perf_counter
import numpy as np
from PIL import Image
import requests
payload="https://datasets-server.huggingface.co/assets/beans/--/default/validation/30/image/image.jpg"
def measure_latency(pipe):
    # prepare data
image = Image.open(requests.get(payload, stream=True).raw)
inputs = pipe.feature_extractor(images=image, return_tensors="pt")
latencies = []
# warm up
for _ in range(10):
_ = pipe.model(**inputs)
# Timed run
for _ in range(200):
start_time = perf_counter()
_ = pipe.model(**inputs)
latency = perf_counter() - start_time
latencies.append(latency)
# Compute run statistics
time_avg_ms = 1000 * np.mean(latencies)
time_std_ms = 1000 * np.std(latencies)
time_p95_ms = 1000 * np.percentile(latencies,95)
return f"P95 latency (ms) - {time_p95_ms}; Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f};", time_p95_ms
vanilla_model=measure_latency(vanilla_clf)
quantized_model=measure_latency(q8_clf)
print(f"Vanilla model: {vanilla_model[0]}")
print(f"Quantized model: {quantized_model[0]}")
print(f"Improvement through quantization: {round(vanilla_model[1]/quantized_model[1],2)}x")
# Vanilla model: P95 latency (ms) - 165.06651640004284; Average latency (ms) - 149.00 +\- 11.22;
# Quantized model: P95 latency (ms) - 63.56140074997256; Average latency (ms) - 62.81 +\- 2.18;
# Improvement through quantization: 2.6x
```
We managed to accelerate our model latency from 165ms to 64ms or 2.6x while keeping 99.99% of the accuracy.
![performance](/static/blog/optimizing-vision-transformer/vit-performance.png)
## Conclusion
We successfully quantized our vanilla Transformers model with Hugging Face Optimum and managed to accelerate our model latency from 165ms to 64ms, or 2.6x, while keeping 99.99% of the accuracy.
But I have to say that this isn't a plug and play process you can transfer to any Transformers model, task or dataset. |
Mount your AWS EFS volume into AWS Lambda with the Serverless Framework | https://www.philschmid.de/mount-your-aws-efs-volume-into-aws-lambda-with-the-serverless-framework | 2020-08-12 | [
"AWS",
"Serverless"
] | Leverage your Serverless architectures by mounting your AWS EFS volume into your AWS Lambda with the Serverless Framework. | "_Just like wireless internet has wires somewhere, serverless architectures still have servers somewhere. What
‘serverless’ really means is that as a developer you don’t have to think about those servers. You just focus on
code." -_ [serverless.com](https://serverless.com/learn/overview/)
This focus is only possible if we make some tradeoffs. Currently, all Serverless FaaS Services like
[AWS Lambda](https://aws.amazon.com/de/lambda/), [Google Cloud Functions](https://cloud.google.com/functions),
[Azure Functions](https://azure.microsoft.com/de-de/services/functions/) are having limits. For example, there is no
real state or no endless configurable memory.
These limitations have led to serverless architectures being used more for software development and less for machine
learning, especially deep learning.
A big hurdle to overcome in serverless deep learning with tools like [AWS Lambda](https://aws.amazon.com/de/lambda/),
[Google Cloud Functions](https://cloud.google.com/functions),
[Azure Functions](https://azure.microsoft.com/de-de/services/functions/) is storage.
[Tensorflow](https://www.tensorflow.org/) and [Pytorch](https://pytorch.org/) are having a huge size and newer "State of
the art" models like BERT have a size of over 300MB. So far it was only possible to use them if you used some
compression techniques. You can check out two of my posts on how you could do this:
- [Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero)
- [Serverless BERT with HuggingFace and AWS Lambda](https://www.philschmid.de/serverless-bert-with-huggingface-and-aws-lambda)
But last month AWS announced mountable storage for your serverless functions. They added support for
[Amazon Elastic File System (EFS)](https://aws.amazon.com/efs/?nc1=h_ls), a
scalable and elastic NFS file system. This allows you to mount your AWS EFS filesystem to your
[AWS Lambda](https://aws.amazon.com/lambda/?nc1=h_ls) function.
In their
[blog post](https://aws.amazon.com/blogs/compute/using-amazon-efs-for-aws-lambda-in-your-serverless-applications/), they
explain how to connect an AWS Lambda function to AWS EFS. The blog post is very nice, definitely check it out.
In this post, we are going to do the same, but a bit better with using the
[Serverless Framework](https://www.serverless.com/) and without the manual work.
![serverless-architecture](/static/blog/mount-your-aws-efs-volume-into-aws-lambda-with-the-serverless-framework/serverless-architecture.svg)
**_PREVIEW:_** I am building a CLI tool called `efsync`, which enables you to automatically upload files (pip packages,
ML models, ...) to an EFS file system.
Until I finish `efsync` you can use
[AWS Datasync to upload your data to an AWS EFS file system](https://docs.aws.amazon.com/efs/latest/ug/gs-step-four-sync-files.html).
---
## What is AWS Lambda?
You are probably familiar with [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html), but to make
things clear AWS Lambda is a computing service that lets you run code without managing servers. It executes your code
only when required and scales automatically, from a few requests per day to thousands per second. You only pay for the
compute time you consume - there is no charge when your code is not running.
![AWS Lambda Logo](/static/blog/mount-your-aws-efs-volume-into-aws-lambda-with-the-serverless-framework/lambda-logo.png)
[https://aws.amazon.com/de/lambda/features/](https://aws.amazon.com/de/lambda/features/)
---
## What is AWS EFS?
Amazon EFS is a fully-managed service that makes it easy to set up, scale, and cost-optimize file storage in the Amazon
Cloud. Amazon EFS-filesystems can automatically scale from gigabytes to petabytes of data without needing to provision
storage. Amazon EFS is designed to be highly durable and highly available. With Amazon EFS, there is no minimum fee or
setup costs, and you pay only for what you use.
---
## Serverless Framework
The Serverless Framework helps us develop and deploy AWS Lambda functions. It’s a CLI that offers structure, automation,
and best practices right out of the box. It also allows us to focus on building sophisticated, event-driven, serverless
architectures, comprised of functions and events.
![Serverless Framework Logo](/static/blog/mount-your-aws-efs-volume-into-aws-lambda-with-the-serverless-framework/serverless-logo.png)
If you aren’t familiar or haven’t set up the Serverless Framework, take a look at
this [quick-start with the Serverless Framework](https://serverless.com/framework/docs/providers/aws/guide/quick-start/).
---
## Tutorial
We build an AWS Lambda function with `python3.8` as runtime, which is going to import and use `pip packages` located on
our EFS-filesystem. As an example, we use `pandas` and `pyjokes`. They could easily be replaced by `Tensorflow` or
`Pytorch`.
Before we get started, make sure you have the [Serverless Framework](https://serverless.com/) configured and an
EFS-filesystem set up with the required dependencies. We are not going to cover the steps on how to install the
dependencies and upload them to EFS in this blog post. You can either user
[AWS Datasync](https://docs.aws.amazon.com/efs/latest/ug/gs-step-four-sync-files.html) or start an `ec2-instance`
connect with `ssh`, mount the EFS-filesystem with `amazon-efs-utils`, and use `pip install -t` to install the pip
packages on efs.
**We are going to do:**
- create a Python Lambda function with the Serverless Framework
- configure the `serverless.yaml` and add our `EFS-filesystem` as mount volume
- adjust the `handler.py` and import `pandas` and `pyjokes` from EFS
- deploy & test the function
---
## 1. Create a Python Lambda function
First, we create our AWS Lambda function by using the Serverless CLI with the `aws-python3` template.
```bash
serverless create --template aws-python3 --path serverless-efs
```
This CLI command creates a new directory containing a `handler.py`, `.gitignore`, and `serverless.yaml` file. The
`handler.py` contains some basic boilerplate code.
```python
import json
def hello(event, context):
body = {
"message": "Go Serverless v1.0! Your function executed successfully!",
"input": event
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
---
## 2. Configure the `serverless.yaml` and add our `EFS-filesystem` as mount volume
I provide the complete `serverless.yaml`for this example, but we go through all the details we need for our
EFS-filesystem and leave out all standard configurations. If you want to learn more about the `serverless.yaml`, I
suggest you check out
[Scaling Machine Learning from ZERO to HERO](https://www.philschmid.de/scaling-machine-learning-from-zero-to-hero). In
this article, I went through each configuration and explained its usage.
```yaml
service: blog-serverless-efs
plugins:
- serverless-pseudo-parameters
custom:
efsAccessPoint: <your-efs-accesspoint>
LocalMountPath: <mount-directory-in-aws-lambda-function>
subnetsId: <subnetid-in-which-efs-is>
securityGroup: <any-security-group>
provider:
name: aws
runtime: python3.8
region: eu-central-1
package:
exclude:
- node_modules/**
- .vscode/**
- .serverless/**
- .pytest_cache/**
- __pychache__/**
functions:
joke:
handler: handler.handler
environment: # Service wide environment variables
MNT_DIR: ${self:custom.LocalMountPath}
vpc:
securityGroupIds:
- ${self:custom.securityGroup}
subnetIds:
- ${self:custom.subnetsId}
iamManagedPolicies:
- arn:aws:iam::aws:policy/AmazonElasticFileSystemClientReadWriteAccess
events:
- http:
path: joke
method: get
resources:
extensions:
# Name of function <joke>
JokeLambdaFunction:
Properties:
FileSystemConfigs:
- Arn: 'arn:aws:elasticfilesystem:${self:provider.region}:#{AWS::AccountId}:access-point/${self:custom.efsAccessPoint}'
LocalMountPath: '${self:custom.LocalMountPath}'
```
First, we need to install the `serverless-pseudo-parameters` plugin with the following command.
```bash
npm install serverless-pseudo-parameters
```
We use the `serverless-pseudo-parameters` plugin to get our `AWS::AccountID` referenced in the `serverless.yaml`. All
custom needed variables are referenced under `custom`.
- `efsAccessPoint` should be the value of your EFS access point. You can find it in the AWS Management Console under
`EFS`. This one should look similar to this `fsap-0a31095162dd0ca44`
- `LocalMountPath` is the path under which EFS is mounted in the AWS Lambda function
- `subnetsId` should have the same id as the EFS-filesystem. If you started your filesystem in multiple Availability
Zones you can choose the one you want.
- `securityGroup` can be any security group in the AWS account. We need this to deploy our AWS Lambda function into the
required subnet. We can use the `default` security group id. This one should look like this `sg-1018g448`.
We utilize Cloudformation extensions to mount the EFS-filesystem after our lambda is created. Therefore we use this
little snippet.
[Extensions can be used to override Cloudformation Resources](https://www.serverless.com/framework/docs/providers/aws/guide/resources#override-aws-cloudformation-resource).
```yaml
resources:
extensions:
# Name of function <joke>
JokeLambdaFunction:
Properties:
FileSystemConfigs:
- Arn: "arn:aws:elasticfilesystem:${self:provider.region}:#{AWS::AccountId}:access-point/${self:custom.efsAccessPoint}"
LocalMountPath: "${self:custom.LocalMountPath}"
```
---
## 3. Adjust the `handler.py` and import `pandas` and `pyjokes` from EFS
The last step before we can deploy is to adjust our `handler.py` and import `pandas` and `pyjokes` from EFS. In my
example, I used `/mnt/efs` as `localMountPath` and installed my pip packages in `lib/`.
To use our dependencies from our EFS-filesystem we have to add our `localMountPath` path to our `PYTHONPATH`. Therefore
we add a small `try/except` statement at the top of your `handler.py`, which appends our `mnt/efs/lib` to the
`PYTHONPATH`. Lastly, we add some demo calls to show our 2 dependencies work.
```python
try:
import sys
import os
sys.path.append(os.environ['MNT_DIR']+'/lib') # nopep8 # noqa
except ImportError:
pass
import json
import os
import pyjokes
from pandas import DataFrame
def handler(event, context):
data = {'Product': ['Desktop Computer', 'Tablet', 'iPhone', 'Laptop'],
'Price': [700, 250, 800, 1200]
}
df = DataFrame(data, columns=['Product', 'Price'])
body = {
"frame": df.to_dict(),
"joke": pyjokes.get_joke()
}
response = {
"statusCode": 200,
"body": json.dumps(body)
}
return response
```
---
## 4. Deploy & Test the function
In order to deploy the function we only have to run `serverless deploy`.
After this process is done we should see something like this.
![serverless bash deployment](/static/blog/mount-your-aws-efs-volume-into-aws-lambda-with-the-serverless-framework/serverless-deployment.png)
To test our Lambda function we can use Insomnia, Postman, or any other REST client. Just send a GET-Request to our
created endpoint. The answer should look like this.
![insomnia-request](/static/blog/mount-your-aws-efs-volume-into-aws-lambda-with-the-serverless-framework/insomnia-request.png)
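With `curl`, the same test looks like this (the endpoint URL is a placeholder taken from your own `serverless deploy` output); the response body contains the serialized DataFrame and a joke:

```bash
curl https://<api-id>.execute-api.eu-central-1.amazonaws.com/<stage>/joke
```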
The first request to the cold AWS Lambda function took around 8 seconds. After it is warmed up it takes around 100-150ms
as you can see in the screenshot.
The best thing is, our AWS Lambda function automatically scales up if there are several incoming requests up to
thousands of parallel requests without any worries.
_If you rebuild this, be aware that the first request could take a while._
---
You can find the [GitHub repository](https://github.com/philschmid/serverless-efs-and-aws-lambda) with the complete code
[here](https://github.com/philschmid/serverless-efs-and-aws-lambda).
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Fine-tuning LayoutLM for document-understanding using Keras & Hugging Face Transformers | https://www.philschmid.de/fine-tuning-layoutlm-keras | 2022-10-13 | [
"Keras",
"HuggingFace",
"Transformers",
"LayoutLM"
] | Learn how to fine-tune LayoutLM for document understanding using Keras & Hugging Face Transformers. | In this blog, you will learn how to fine-tune [LayoutLM (v1)](https://huggingface.co/docs/transformers/model_doc/layoutlm) for document understanding using Tensorflow, Keras & Hugging Face Transformers. LayoutLM is a transformer for document image understanding and information extraction and was originally published by Microsoft Research as a PyTorch model, which was later converted to Keras by the Hugging Face Team. LayoutLM (v1) is the only model in the LayoutLM family with an MIT license, which allows it to be used for commercial purposes, unlike LayoutLMv2/LayoutLMv3.
We will use the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset, a collection of 199 fully annotated forms. More information about the dataset can be found on the [dataset page](https://guillaumejaume.github.io/FUNSD/).
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load and prepare FUNSD dataset](#2-load-and-prepare-funsd-dataset)
3. [Fine-tune and evaluate LayoutLM](#3-fine-tune-and-evaluate-layoutlm)
4. [Run inference and parse form](#4-run-inference-and-parse-form)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: LayoutLM by Microsoft Research
LayoutLM is a multimodal Transformer model for document image understanding and information extraction and can be used for form understanding and receipt understanding.
![layout](/static/blog/fine-tuning-layoutlm-keras/layoutlm.png)
- Paper: https://arxiv.org/pdf/1912.13318.pdf
- Official repo: https://github.com/microsoft/unilm/tree/master/layoutlm
---
Now we know how LayoutLM works, so let's get started. 🚀
_Note: This tutorial was created and run on a g4dn.xlarge AWS EC2 Instance including a NVIDIA T4._
## 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
Additionally, we need to install an OCR library to extract text from images. We will use [pytesseract](https://pypi.org/project/pytesseract/).
If you haven't set up a Tensorflow environment you can use the `conda` snippet below.
```bash
conda create --channel=conda-forge --name tf \
python=3.9 \
nvidia::cudatoolkit=11.2 \
tensorflow=2.10.0=*cuda112*py39*
```
```python
# ubuntu
!sudo apt install -y tesseract-ocr
# python
!pip install pytesseract transformers datasets evaluate seqeval tensorboard
# install git-fls for pushing model and logs to the hugging face hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## 2. Load and prepare FUNSD dataset
We will use the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset, a collection of 199 fully annotated forms. The dataset is available on Hugging Face at [nielsr/funsd](https://huggingface.co/datasets/nielsr/funsd).
_Note: The LayoutLM model doesn't have a `AutoProcessor` to create the our input documents, but we can use the `LayoutLMv2Processor` instead._
```python
processor_id="microsoft/layoutlmv2-base-uncased"
dataset_id ="nielsr/funsd"
```
To load the `funsd` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 149
# Test dataset size: 50
```
Lets checkout an example of the dataset.
```python
from PIL import Image, ImageDraw, ImageFont
image = Image.open(dataset['train'][40]['image_path'])
image = image.convert("RGB")
image.resize((350,450))
```
![sample](/static/blog/fine-tuning-layoutlm-keras/sample.png)
We can display all our classes by inspecting the features of our dataset. Those `ner_tags` will later be used to create a user-friendly output after we have fine-tuned our model.
```python
labels = dataset['train'].features['ner_tags'].feature.names
print(f"Available labels: {labels}")
id2label = {v: k for v, k in enumerate(labels)}
label2id = {k: v for v, k in enumerate(labels)}
# Available labels: ['O', 'B-HEADER', 'I-HEADER', 'B-QUESTION', 'I-QUESTION', 'B-ANSWER', 'I-ANSWER']
```
To train our model we need to convert our inputs (text/image) to token IDs. This is done by a 🤗 Transformers Tokenizer and PyTesseract. If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```python
from transformers import LayoutLMv2Processor
# use LayoutLMv2 processor without ocr since the dataset already includes the ocr text
processor = LayoutLMv2Processor.from_pretrained(processor_id, apply_ocr=False)
```
Before we can process our dataset we need to define the `features`, i.e. the processed inputs, which are later passed into the model. Features are a special dictionary that defines the internal structure of a dataset.
Compared to traditional NLP datasets we need to add the `bbox` feature, which is a 2D array of the bounding boxes for each token.
```python
from PIL import Image
from functools import partial
from datasets import Features, Sequence, ClassLabel, Value, Array2D
# we need to define custom features
features = Features(
{
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"token_type_ids": Sequence(Value(dtype="int64")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
"labels": Sequence(ClassLabel(names=labels)),
}
)
# preprocess function to prepare the samples into the correct format for the model
def process(sample, processor=None):
encoding = processor(
Image.open(sample["image_path"]).convert("RGB"),
sample["words"],
boxes=sample["bboxes"],
word_labels=sample["ner_tags"],
padding="max_length",
truncation=True,
)
del encoding["image"]
return encoding
# process the dataset and format it to pytorch
proc_dataset = dataset.map(
partial(process, processor=processor),
remove_columns=["image_path", "words", "ner_tags", "id", "bboxes"],
features=features,
)
print(proc_dataset["train"].features.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels'])
```
## 3. Fine-tune and evaluate LayoutLM
After we have processed our dataset, we can start training our model. Therefore we first need to load the [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) model with the `TFLayoutLMForTokenClassification` class, providing the label mapping of our dataset.
```python
from transformers import TFLayoutLMForTokenClassification
# huggingface hub model id
model_id = "microsoft/layoutlm-base-uncased"
# load model with correct number of labels and mapping
model = TFLayoutLMForTokenClassification.from_pretrained(
model_id, num_labels=len(labels), label2id=label2id, id2label=id2label
)
```
Before we can train our model we have to define the hyperparameters we want to use for our training. Therefore we will create a `dataclass`.
```python
from dataclasses import dataclass
from huggingface_hub import HfFolder
import tensorflow as tf
@dataclass
class Hyperparameters:
num_train_epochs: int = 8
train_batch_size: int = 8
eval_batch_size: int = 8
learning_rate: float = 3e-5
weight_decay_rate: float = 0.01
output_dir: str = 'layoutlm-funsd-tf'
hub_model_id: str = f'layoutlm-funsd-tf'
hub_token: str = HfFolder.get_token() # or your token directly "hf_xxx"
fp16 = True
# Train in mixed-precision float16
def __post_init__(self):
if self.fp16:
tf.keras.mixed_precision.set_global_policy("mixed_float16")
hp = Hyperparameters()
```
The next step is to convert our dataset to a `tf.data.Dataset`. This can be done with `model.prepare_tf_dataset`.
```python
# prepare train dataset
tf_train_dataset = model.prepare_tf_dataset(
proc_dataset["train"],
batch_size=hp.train_batch_size,
shuffle=True,
)
# prepare test dataset
tf_test_dataset = model.prepare_tf_dataset(
proc_dataset["test"],
batch_size=hp.eval_batch_size,
shuffle=False,
)
```
As mentioned in the beginning, we want to use the [Hugging Face Hub](https://huggingface.co/models) for model versioning and monitoring. Therefore we want to push our model weights, during and after training, to the Hub to version it. Additionally, we want to track the performance during training; therefore we will push the `Tensorboard` logs along with the weights to the Hub to use the "Training Metrics" feature to monitor our training in real-time.
Additionally, we are going to use the `KerasMetricCallback` from the `transformers` library to evaluate our model during training using `seqeval` and `evaluate`. The `KerasMetricCallback` allows us to compute metrics at the end of every epoch. It is particularly useful for common NLP metrics, like BLEU and ROUGE as well as for class specific `f1` scores, like `seqeval` provides.
```python
import evaluate
import numpy as np
# load seqeval metric
metric = evaluate.load("seqeval")
# labels of the model
ner_labels = list(model.config.id2label.values())
def compute_metrics(p):
predictions, labels = p
predictions = np.argmax(predictions, axis=-1)
all_predictions = []
all_labels = []
for prediction, label in zip(predictions, labels):
for predicted_idx, label_idx in zip(prediction, label):
if label_idx == -100:
continue
all_predictions.append(ner_labels[predicted_idx])
all_labels.append(ner_labels[label_idx])
res = metric.compute(predictions=[all_predictions], references=[all_labels])
return {
"overall_precision": res["overall_precision"],
"overall_recall": res["overall_recall"],
"overall_f1": res["overall_f1"],
"overall_accuracy": res["overall_accuracy"],
}
```
We can add all our callbacks to a list which we then provide to the `model.fit` method. We are using the following callbacks:
- `TensorBoard`: To log our training metrics to Tensorboard
- `PushToHubCallback`: To push our model weights and Tensorboard logs to the Hub
- `KerasMetricCallback`: To evaluate our model during training
```python
import os
from transformers.keras_callbacks import PushToHubCallback, KerasMetricCallback
from tensorflow.keras.callbacks import TensorBoard as TensorboardCallback
callbacks = []
callbacks.append(TensorboardCallback(log_dir=os.path.join(hp.output_dir, "logs")))
callbacks.append(KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_dataset))
if hp.hub_token:
callbacks.append(PushToHubCallback(output_dir=hp.output_dir, hub_model_id=hp.hub_model_id, hub_token=hp.hub_token))
```
Before we can start our training we have to create the optimizer and compile our model.
```python
import tensorflow as tf
from transformers import AdamWeightDecay
# create optimizer with weight decay
optimizer = AdamWeightDecay(
learning_rate = hp.learning_rate,
weight_decay_rate = hp.weight_decay_rate,
)
# compile model
model.compile(optimizer=optimizer)
```
We can start training by calling `model.fit`, providing the training and validation dataset, along with the hyperparameters, optimizer, metrics, and callbacks we defined before.
```python
# train model
train_results = model.fit(
tf_train_dataset,
validation_data=tf_test_dataset,
callbacks=callbacks,
epochs=hp.num_train_epochs,
)
```
_results_
```bash
18/18 [==============================] - 47s 2s/step - loss: 1.6940 - val_loss: 1.4151 - overall_precision: 0.2686 - overall_recall: 0.2785 - overall_f1: 0.2735 - overall_accuracy: 0.5128
Epoch 2/8
18/18 [==============================] - 20s 1s/step - loss: 1.1731 - val_loss: 0.8665 - overall_precision: 0.5771 - overall_recall: 0.6101 - overall_f1: 0.5932 - overall_accuracy: 0.7267
Epoch 3/8
18/18 [==============================] - 40s 2s/step - loss: 0.7612 - val_loss: 0.6849 - overall_precision: 0.6362 - overall_recall: 0.7336 - overall_f1: 0.6814 - overall_accuracy: 0.7784
Epoch 4/8
18/18 [==============================] - 20s 1s/step - loss: 0.5630 - val_loss: 0.6265 - overall_precision: 0.6748 - overall_recall: 0.7592 - overall_f1: 0.7145 - overall_accuracy: 0.8017
Epoch 5/8
18/18 [==============================] - 40s 2s/step - loss: 0.4441 - val_loss: 0.6256 - overall_precision: 0.6935 - overall_recall: 0.7767 - overall_f1: 0.7328 - overall_accuracy: 0.8036
Epoch 6/8
18/18 [==============================] - 20s 1s/step - loss: 0.3641 - val_loss: 0.6402 - overall_precision: 0.7115 - overall_recall: 0.7772 - overall_f1: 0.7429 - overall_accuracy: 0.7940
Epoch 7/8
18/18 [==============================] - 40s 2s/step - loss: 0.2781 - val_loss: 0.6248 - overall_precision: 0.7176 - overall_recall: 0.7868 - overall_f1: 0.7506 - overall_accuracy: 0.8141
Epoch 8/8
18/18 [==============================] - 20s 1s/step - loss: 0.2280 - val_loss: 0.6532 - overall_precision: 0.7218 - overall_recall: 0.7878 - overall_f1: 0.7534 - overall_accuracy: 0.8144
```
Nice, we have trained our model. 🎉 The best score we achieved is an overall f1 score of `0.7534`.
![tensorboard](/static/blog/fine-tuning-layoutlm-keras/tensorboard.png)
After our training is done we also want to save our processor to the Hugging Face Hub and create a model card.
```python
# change apply_ocr to True to use the ocr text for inference
processor.feature_extractor.apply_ocr = True
# Save processor and create model card
model.push_to_hub(hp.hub_model_id, use_auth_token=hp.hub_token)
processor.push_to_hub(hp.hub_model_id, use_auth_token=hp.hub_token)
```
## 4. Run Inference
Now that we have a trained model, we can use it to run inference. We will create a function that takes a document image and returns the image annotated with the predicted entities and their bounding boxes.
```python
from transformers import TFLayoutLMForTokenClassification, LayoutLMv2Processor
from PIL import Image, ImageDraw, ImageFont
import tensorflow as tf
# load model and processor from huggingface hub
model = TFLayoutLMForTokenClassification.from_pretrained("philschmid/layoutlm-funsd-tf")
processor = LayoutLMv2Processor.from_pretrained("philschmid/layoutlm-funsd-tf")
# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
return [
width * (bbox[0] / 1000),
height * (bbox[1] / 1000),
width * (bbox[2] / 1000),
height * (bbox[3] / 1000),
]
label2color = {
"B-HEADER": "blue",
"B-QUESTION": "red",
"B-ANSWER": "green",
"I-HEADER": "blue",
"I-QUESTION": "red",
"I-ANSWER": "green",
}
# draw results onto the image
def draw_boxes(image, boxes, predictions):
width, height = image.size
normalizes_boxes = [unnormalize_box(box, width, height) for box in boxes]
# draw predictions over the image
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
for prediction, box in zip(predictions, normalizes_boxes):
if prediction == "O":
continue
draw.rectangle(box, outline="black")
draw.rectangle(box, outline=label2color[prediction])
draw.text((box[0] + 10, box[1] - 10), text=prediction, fill=label2color[prediction], font=font)
return image
# run inference
def run_inference(path, model=model, processor=processor, output_image=True):
# create model input
image = Image.open(path).convert("RGB")
encoding = processor(image, return_tensors="tf")
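    # LayoutLM (v1) only takes text and bounding boxes as input, so drop the image tensor returned by the processor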
del encoding["image"]
# run inference
outputs = model(**encoding)
predictions = tf.squeeze(tf.argmax(outputs.logits, axis=-1)).numpy()
# get labels
labels = [model.config.id2label[prediction] for prediction in predictions]
if output_image:
return draw_boxes(image, encoding["bbox"][0], labels)
else:
return labels
run_inference(dataset['train'][40]["image_path"])
```
![result](/static/blog/fine-tuning-layoutlm-keras/result.png)
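If you only need the predicted labels instead of the rendered image, the same helper can be called with `output_image=False` (a small usage example based on the function defined above):
```python
# return only the list of predicted labels instead of the annotated image
labels = run_inference(dataset['train'][40]["image_path"], output_image=False)
print(labels[:10])
```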
## Conclusion
We managed to successfully fine-tune our LayoutLM to extract information from forms. With only `149` training examples we achieved an overall f1 score of `0.7534`, which is impressive and another proof of the power of transfer learning.
Now it's your turn to integrate Transformers into your own projects. 🚀
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Getting started with Transformers and TPU using PyTorch | https://www.philschmid.de/getting-started-tpu-transformers | 2023-01-16 | [
"Pytorch",
"BERT",
"HuggingFace",
"TPU"
] | Learn how to get started with Hugging Face Transformers and TPUs using PyTorch, fine-tune a BERT model for Text Classification using the newest Google Cloud TPUs. | Tensor Processing Units (TPU) are specialized accelerators developed by Google to speed up machine learning tasks. They are built from the ground up with a focus on machine & deep learning workloads.
TPUs are available on the [Google Cloud](https://cloud.google.com/tpu/docs/tpus) and can be used with popular deep learning frameworks, including [TensorFlow](https://www.tensorflow.org/), [JAX](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html), and [PyTorch](https://pytorch.org/get-started/locally/).
This blog post will cover how to get started with Hugging Face Transformers and TPUs using PyTorch and [accelerate](https://huggingface.co/docs/accelerate/index). You will learn how to fine-tune a BERT model for Text Classification using the newest Google Cloud TPUs.
You will learn how to:
1. [Launch TPU VM on Google Cloud](#1-launch-tpu-vm-on-google-cloud)
2. [Setup Jupyter environment & install Transformers](#2-setup-jupyter-environment--install-transformers)
3. [Load and prepare the dataset](#3-load-and-prepare-the-dataset)
4. [Fine-tune BERT on the TPU with the Hugging Face `accelerate`](#4-fine-tune-bert-on-the-tpu-with-the-hugging-face-accelerate)
Before we can start, make sure you have a **[Hugging Face Account](https://huggingface.co/join)** to save artifacts and experiments.
## 1. Launch TPU VM on Google Cloud
The first step is to create a TPU development environment. We are going to use the [Google Cloud CLI](https://cloud.google.com/sdk/docs/install) `gcloud` to create a cloud TPU VM using the PyTorch 1.13 image.
If you don't have `gcloud` installed, check out the [documentation](https://cloud.google.com/sdk/docs/install) or run the command below.
```bash
curl https://sdk.cloud.google.com | bash
exec zsh -l
gcloud init
```
We can now create our cloud TPU VM with our preferred region, project and version.
_Note: Make sure to have the [Cloud TPU API](https://console.cloud.google.com/compute/tpus/) enabled to create your Cloud TPU VM_
```bash
gcloud compute tpus tpu-vm create bert-example \
--zone=europe-west4-a \
--accelerator-type=v3-8 \
--version=tpu-vm-pt-1.13
```
## 2. Setup Jupyter environment & install Transformers
Our cloud TPU VM is now running, and we can ssh into it, but who likes to develop inside a terminal? We want to set up a **`Jupyter`** environment, which we can access through our local browser. For this, we need to add a port for forwarding in the `gcloud` ssh command, which will tunnel our localhost traffic to the cloud TPU.
```bash
gcloud compute tpus tpu-vm ssh bert-example \
--zone europe-west4-a \
-- -L 8080:localhost:8080
```
Before we can access our environment, we need to install `jupyter` and the Hugging Face Libraries, including `transformers` and `datasets`. Running the following command will install all the required packages.
```bash
pip3 install jupyter transformers datasets evaluate accelerate tensorboard scikit-learn --upgrade
# install a specific markupsafe version to avoid breakage
pip3 install markupsafe==2.0.1
```
We can now start our `jupyter` server.
```bash
python3 -m notebook --allow-root --port=8080
```
You should see a familiar `jupyter` output with a URL to the notebook.
`http://localhost:8080/?token=8c1739aff1755bd7958c4cfccc8d08cb5da5234f61f129a9`
We can click on it, and a `jupyter` environment opens in our local browser.
![jupyter](/static/blog/getting-started-tpu-transformers/jupyter.png)
We can now create a new notebook and test to see if we have access to the TPUs.
```python
import os
# make the TPU available as an accelerator to torch-xla
os.environ["XRT_TPU_CONFIG"]="localservice;0;localhost:51011"
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device()
t1 = torch.randn(3,3,device=device)
t2 = torch.randn(3,3,device=device)
print(t1 + t2)
# tensor([[-1.1846, -0.7140, -0.4168],
# [-0.3259, -0.5264, -0.8828],
# [-0.8562, -0.5813, 0.3264]], device='xla:1')
```
Awesome! 🎉 We can use our TPU with PyTorch. Let's get to our example.
**NOTE: make sure to restart your notebook so it no longer allocates the TPU for the tensors we created!**
## 3. Load and prepare the dataset
We are training a Text Classification model on the [BANKING77](https://huggingface.co/datasets/banking77) dataset to keep the example straightforward. The BANKING77 dataset provides a fine-grained set of intents (classes) in a banking/finance domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection.
This is the same dataset we used for the [“Getting started with Pytorch 2.0 and Hugging Face Transformers”](https://www.philschmid.de/getting-started-pytorch-2-0-transformers), which will help us to compare the performance later.
We will use the `load_dataset()` method from the [🤗 Datasets](https://huggingface.co/docs/datasets/index) library to load the `banking77`.
```python
from datasets import load_dataset
# Dataset id from huggingface.co/dataset
dataset_id = "banking77"
# Load raw dataset
raw_dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(raw_dataset['train'])}")
print(f"Test dataset size: {len(raw_dataset['test'])}")
# Train dataset size: 10003
# Test dataset size: 3080
```
Let’s check out an example of the dataset.
```python
from random import randrange
random_id = randrange(len(raw_dataset['train']))
raw_dataset['train'][random_id]
# {'text': 'How can I change my PIN without going to the bank?', 'label': 21}
```
To train our model, we need to convert our "Natural Language" to token IDs. This is done by a Tokenizer, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary). If you want to learn more about this, check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=pt) of the [Hugging Face Course](https://huggingface.co/course/chapter1/1).
Since TPUs expect a fixed shape of inputs, we need to make sure to truncate or pad all samples to the same length.
```python
from transformers import AutoTokenizer
# Model id to load the tokenizer
model_id = "bert-base-uncased"
# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Tokenize helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True,return_tensors="pt")
# Tokenize dataset
raw_dataset = raw_dataset.rename_column("label", "labels") # to match Trainer
tokenized_dataset = raw_dataset.map(tokenize, batched=True, remove_columns=["text"])
tokenized_dataset = tokenized_dataset.with_format("torch")
print(tokenized_dataset["train"].features.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'labels'])
```
We are using Hugging Face [accelerate](https://huggingface.co/docs/accelerate/index) to train our model in this example. [Accelerate](https://huggingface.co/docs/accelerate/index) is a library for writing hardware-agnostic PyTorch training loops, which makes it super easy to write TPU training code without needing to know any XLA internals.
## 4. Fine-tune BERT on the TPU with the Hugging Face `accelerate`
[Accelerate](https://huggingface.co/docs/accelerate/index) enables PyTorch users to run training across any distributed configuration by adding just four lines of code! Built on `torch_xla` and `torch.distributed`, 🤗 Accelerate takes care of the heavy lifting, so you don’t have to write any custom code to adapt to these platforms.
Accelerate implements a [notebook launcher](https://huggingface.co/docs/accelerate/basic_tutorials/notebook), which allows you to start training jobs from a notebook cell instead of using `torchrun` or another launcher. This makes experimenting much easier, since we can write all the code in the notebook rather than creating long and complex Python scripts. We are going to use the `notebook_launcher`, which also allows us to skip the `accelerate config` command, since we define our environment inside the notebook.
The two most important things to remember for training on TPUs are that the `accelerator` object has to be defined inside the `training_function`, and your model should be created outside the training function.
We will load our model with the `AutoModelForSequenceClassification` class from the [Hugging Face Hub](https://huggingface.co/bert-base-uncased). This will initialize the pre-trained BERT weights with a classification head on top. Here we pass the number of classes (77) from our dataset and the label names to have readable outputs for inference.
```python
from transformers import AutoModelForSequenceClassification
# Model id to load the tokenizer
model_id = "bert-base-uncased"
# Prepare model labels - useful for inference
labels = tokenized_dataset["train"].features["labels"].names
num_labels = len(labels)
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
# Download the model from huggingface.co/models
model = AutoModelForSequenceClassification.from_pretrained(
model_id, num_labels=num_labels, label2id=label2id, id2label=id2label
)
```
We evaluate our model during training. We use the `evaluate` library to calculate the [f1 metric](https://huggingface.co/spaces/evaluate-metric/f1) during training on our test split.
```python
import evaluate
import numpy as np
# Metric Id
metric = evaluate.load("f1")
```
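For reference, this is how the metric is invoked; we use `average="weighted"` later in the training loop since BANKING77 is a multi-class problem. A minimal standalone example with made-up predictions:
```python
# quick standalone check of the f1 metric with dummy values
example = metric.compute(
    predictions=[0, 1, 2, 0],
    references=[0, 1, 1, 0],
    average="weighted",
)
print(example)  # {'f1': 0.83...}
```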
We can now write our `train_function`. If you want to learn more about how to adjust a basic PyTorch training loop to `accelerate` you can take a look at the [Migrating your code to 🤗 Accelerate guide](https://huggingface.co/docs/accelerate/basic_tutorials/migration)**.**
We are using a magic cell `%%writefile` to write the `train_function` to an external `train.py` module to properly use it in `ipython`. The `train.py` module also includes a `create_dataloaders` method, which will be used to create our `DataLoaders` for training using the tokenized dataset.
```python
%%writefile train.py
from datasets import load_dataset, load_metric
from accelerate import Accelerator
from transformers import (
AdamW,
get_linear_schedule_with_warmup
)
from tqdm.auto import tqdm
import datasets
import transformers
import torch
from torch.utils.data import DataLoader
def create_dataloaders(tokenized_dataset, train_batch_size=8, eval_batch_size=32):
train_dataloader = DataLoader(
tokenized_dataset["train"], shuffle=True, batch_size=train_batch_size
)
eval_dataloader = DataLoader(
tokenized_dataset["test"], shuffle=False, batch_size=eval_batch_size
)
return train_dataloader, eval_dataloader
def training_function(model,hyperparameters,metric,tokenized_dataset):
    # Initialize the accelerator (optionally with bf16 mixed precision)
    accelerator = Accelerator()  # or: Accelerator(mixed_precision="bf16")
# To have only one message (and not 8) per logs of Transformers or Datasets, we set the logging verbosity
if accelerator.is_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
train_dataloader, eval_dataloader = create_dataloaders(
tokenized_dataset,train_batch_size=hyperparameters["per_tpu_train_batch_size"], eval_batch_size=hyperparameters["per_tpu_eval_batch_size"]
)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=hyperparameters["learning_rate"])
# Prepare everything
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader
)
num_epochs = hyperparameters["num_epochs"]
# Instantiate learning rate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * num_epochs,
)
# Add a progress bar to keep track of training.
progress_bar = tqdm(range(num_epochs * len(train_dataloader)), disable=not accelerator.is_main_process)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
# run evaluation after the training epoch
model.eval()
all_predictions = []
all_labels = []
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
# We gather predictions and labels from the 8 TPUs to have them all.
all_predictions.append(accelerator.gather(predictions))
all_labels.append(accelerator.gather(batch["labels"]))
# Concatenate all predictions and labels.
all_predictions = torch.cat(all_predictions)[:len(tokenized_dataset["test"])]
all_labels = torch.cat(all_labels)[:len(tokenized_dataset["test"])]
eval_metric = metric.compute(predictions=all_predictions, references=all_labels, average="weighted")
accelerator.print(f"epoch {epoch}:", eval_metric)
```
The last step is to define the `hyperparameters` we use for our training.
```python
hyperparameters = {
"learning_rate": 3e-4,
"num_epochs": 3,
"per_tpu_train_batch_size": 32, # Actual batch size will this x 8
"per_tpu_eval_batch_size": 8, # Actual batch size will this x 8
}
```
And we're ready for launch! It's super easy with the `notebook_launcher` from the Accelerate library.
```python
from train import training_function
from accelerate import notebook_launcher
import os
# set environment variable to spawn xmp
# https://github.com/huggingface/accelerate/issues/967
os.environ["KAGGLE_TPU"] = "yes" # adding a fake env to launch on TPUs
os.environ["TPU_NAME"] = "dummy"
# make the TPU available as an accelerator to torch-xla
os.environ["XRT_TPU_CONFIG"]="localservice;0;localhost:51011"
# args
args = (model, hyperparameters, metric, tokenized_dataset)
# launch training
notebook_launcher(training_function, args)
# epoch 0: {'f1': 0.28473517320655745}
# epoch 1: {'f1': 0.814198544360063}
# epoch 2: {'f1': 0.915311713296595}
```
_Note: You may notice that training seems exceptionally slow at first. This is because TPUs first run through a few batches of data to see how much memory to allocate before utilizing this configured memory allocation extremely efficiently._
We are using 8 TPU `v3` cores with a global batch size of `256` (32 per core), achieving `481 train_samples_per_second`.
The training with compilation and evaluation took `220` seconds and achieved an `f1` score of `0.915`.
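As a quick sanity check, we can relate the reported throughput to the measured runtime. This is a minimal sketch using only the values reported above; the ~$8/hour on-demand price for a `v3-8` TPU VM is an assumption, check the official TPU pricing page for your region:
```python
# relate the reported throughput to the measured runtime (values from the run above)
train_samples = 10003   # BANKING77 train split size
num_epochs = 3
throughput = 481        # reported train_samples_per_second
total_runtime = 220     # seconds, including XLA compilation and evaluation

pure_train_time = train_samples * num_epochs / throughput
print(f"pure training time: ~{pure_train_time:.0f}s")                                 # ~62s
print(f"compilation + evaluation overhead: ~{total_runtime - pure_train_time:.0f}s")  # ~158s

# rough cost estimate; the ~$8/hour v3-8 on-demand rate is an assumption
price_per_hour = 8.00
print(f"estimated cost: ~${total_runtime / 3600 * price_per_hour:.2f}")               # ~$0.49
```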
## Conclusion
In this tutorial, we learned how to train a BERT model for text classification with the BANKING77 dataset on Google Cloud TPUs. Hugging Face accelerate allows you to easily run any PyTorch training loop on TPUs with minimal code changes.
We compared our training with the results of the [“Getting started with Pytorch 2.0 and Hugging Face Transformers”](https://www.philschmid.de/getting-started-pytorch-2-0-transformers) post, which uses the Hugging Face Trainer and PyTorch 2.0 on an NVIDIA A10G GPU. Compared to that run, the TPU accelerate version delivers roughly a 200% improvement in training speed, allowing us to fine-tune BERT in about 3.5 minutes for less than $0.50.
Moving your training to TPUs can help increase the iteration speed of your models and data science teams.
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deep Learning setup made easy with EC2 Remote Runner and Habana Gaudi | https://www.philschmid.de/habana-gaudi-ec2-runner | 2022-07-26 | [
"BERT",
"Habana",
"HuggingFace",
"AWS"
] | Learn how to migrate your training jobs to a Habana Gaudi-based DL1 instance on AWS using EC2 Remote Runner. | Going from experimentation and preparation in a local environment to managed cloud infrastructure is often too complex and prevents data scientists from iterating quickly and efficiently on their Deep Learning projects.
A common workflow I see is that a DS/MLE starts virtual machines in the cloud, sshes into the machine, and does all of the experiments there. This has at least two downsides:
1. It requires a lot of work and experience to start those cloud-based instances (selecting the right environment, CUDA version…), which can lead to a bad developer experience and avoidable mistakes like forgetting to stop an instance.
2. The compute resources might not be used efficiently. In Deep Learning you mostly use GPU-backed instances, which can cost up to $40/h. But since not all operations require a GPU, such as dataset preparation or tokenization (in NLP), a lot of money can be wasted.
To overcome these downsides we created a small package called “[Remote Runner](https://github.com/philschmid/deep-learning-remote-runner)”.
[Remote Runner](https://github.com/philschmid/deep-learning-remote-runner) is an easy pythonic way to migrate your python training scripts from a local environment to a powerful cloud-backed instance to efficiently scale your training, save cost & time, and iterate quickly on experiments in a parallel containerized way.
## How does Remote Runner work?
Remote Runner takes care of all of the heavy lifting for you:
1. Creating all required cloud resources
2. Migrating your script to the remote machine
3. Executing your script
4. Making sure the instance is terminated again.
![Remote Runner.png](/static/blog/habana-gaudi-ec2-runner/remote-runner.png)
Let's give it a try. 🚀
Our goal for the example is to fine-tune a Hugging Face Transformer model using the Habana Gaudi-based **[DL1 instance](https://aws.amazon.com/ec2/instance-types/dl1/)** on AWS to take advantage of the cost performance benefits of Gaudi.
## Managed Deep Learning with Habana Gaudi
In this example, you learn how to migrate your training jobs to a Habana Gaudi-based **[DL1 instance](https://aws.amazon.com/ec2/instance-types/dl1/)** on AWS. Habana Gaudi is a new deep learning training processor for cost-effective, high-performance training promising up to 40% better price-performance than comparable GPUs, which is available on AWS.
We already wrote a blog post in the past on how to **[Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS](https://www.philschmid.de/getting-started-habana-gaudi)**, which I recommend you take a look at to understand how much “manual” work was needed to use the DL1 instances on AWS for your workflows.
In the following example we will cover:
1. [Requirements & Setup](#1-requirements--setup)
2. [Run a text-classification training job on Habana Gaudi DL1](#2-run-a-text-classification-training-job-on-habana-gaudi-dl1)
### 1. Requirements & Setup
Before we can start, make sure you have met the following requirements:
- AWS Account with quota for **[DL1 instance type](https://aws.amazon.com/ec2/instance-types/dl1/)**
- AWS IAM user **[configured in CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)** with permission to create and manage EC2 instances. You can find all permissions needed [here](https://github.com/philschmid/deep-learning-remote-runner#getting-started).
After all requirements are fulfilled, we can begin by installing [Remote-Runner](https://github.com/philschmid/deep-learning-remote-runner#getting-started) and the Hugging Face Hub library. We will use the [Hugging Face Hub](https://huggingface.co/) as the model versioning backend. This allows us to checkpoint, log and track our metrics during training by simply providing one API token.
```bash
pip install rm-runner huggingface_hub
```
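Since the example later reads the token via `HfFolder.get_token()`, make sure you are logged in to the Hugging Face Hub first, either with `huggingface-cli login` or programmatically (a minimal sketch; the token value is a placeholder, replace it with your own access token):
```python
from huggingface_hub import login

# stores the token locally so HfFolder.get_token() can pick it up later
login(token="hf_xxx")  # placeholder, use your own Hugging Face access token
```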
### 2. Run a text-classification training job on Habana Gaudi DL1
We will use an [example](https://github.com/philschmid/deep-learning-remote-runner/tree/main/examples) from the Remote Runner repository. The example will fine-tune a [DistilBERT](https://huggingface.co/distilbert-base-uncased) model on the [IMDB](https://huggingface.co/datasets/imdb) dataset.
_Note: Check out the [Remote Runner](https://github.com/philschmid/deep-learning-remote-runner/tree/main/examples) repository, it includes several different examples._
First, we need to clone the repository and change into the `examples` directory.
```bash
git clone https://github.com/philschmid/deep-learning-remote-runner.git && cd deep-learning-remote-runner/examples
```
The `habana_text_classification.py` script uses the new `GaudiTrainer` from [optimum-habana](https://huggingface.co/docs/optimum/main/en/habana_index) to leverage Habana Gaudi and provides an interface identical to the transformers `Trainer`.
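For illustration, the core of such a script looks roughly like the following. This is a minimal sketch, not the exact content of `habana_text_classification.py`; the Gaudi-specific arguments (`use_habana`, `use_lazy_mode`, `gaudi_config_name`) and the `Habana/distilbert-base-uncased` config name reflect the optimum-habana API as I understand it:
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_id = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# tokenize the IMDB dataset
dataset = load_dataset("imdb")
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length"), batched=True)

# GaudiTrainingArguments mirrors transformers.TrainingArguments plus Gaudi-specific flags
training_args = GaudiTrainingArguments(
    output_dir="distilbert-imdb-remote-runner",
    use_habana=True,                                     # run on HPU instead of GPU/CPU
    use_lazy_mode=True,                                  # lazy execution mode recommended for Gaudi
    gaudi_config_name="Habana/distilbert-base-uncased",  # Gaudi config hosted on the Hugging Face Hub
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
)

# GaudiTrainer exposes the same interface as transformers.Trainer
trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```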
Next, we can adjust the hyperparameters in `habana_example.py`.
```python
from rm_runner import EC2RemoteRunner
from huggingface_hub import HfFolder
# hyperparameters
hyperparameters = {
"model_id": "distilbert-base-uncased",
"dataset_id": "imdb",
"save_repository_id": "distilbert-imdb-remote-runner",
"hf_hub_token": HfFolder.get_token(), # need to be login in with `huggingface-cli login`
"num_train_epochs": 3,
"per_device_train_batch_size": 64,
"per_device_eval_batch_size": 32,
}
hyperparameters_string = " ".join(f"--{key} {value}" for key, value in hyperparameters.items())
```
You can, for example, change the `model_id` to your preferred checkpoint from [Hugging Face](https://huggingface.co/models). The next step is to create our `EC2RemoteRunner` instance. The `EC2RemoteRunner` defines our cloud environment, including the `instance_type`, `region`, and which credentials we want to use. You can also define a custom container to execute your training in, e.g. if you want to include additional dependencies.
_the default container for Habana is the `huggingface/optimum-habana:latest` one._
We are going to use the `dl1.24xlarge` instance in `us-east-1`. For `profile`, enter your configured AWS profile.
```python
# create ec2 remote runner
runner = EC2RemoteRunner(
instance_type="dl1.24xlarge",
profile="hf-sm",
region="us-east-1"
)
```
Next, we need to define the “launch” arguments, which are the `command` to execute and the `source_dir` (which will be uploaded via SCP).
```python
# launch my script with gaudi_spawn for distributed training
runner.launch(
command=f"python3 gaudi_spawn.py --use_mpi --world_size=8 habana_text_classification.py {hyperparameters_string}",
source_dir="scripts",
)
```
Now we can launch our remote training by executing our python file.
```bash
python habana_example.py
```
You should see a similar output to the one below.
```bash
2022-07-25 08:36:39,505 | INFO | Found credentials in shared credentials file: ~/.aws/credentials
2022-07-25 08:36:46,883 | INFO | Created key pair: rm-runner-agld
2022-07-25 08:36:47,778 | INFO | Created security group: rm-runner-agld
2022-07-25 08:36:49,244 | INFO | Launched instance: i-087d5654841eb09e8
2022-07-25 08:36:49,247 | INFO | Waiting for instance to be ready...
2022-07-25 08:37:05,307 | INFO | Instance is ready. Public DNS: ec2-3-84-58-86.compute-1.amazonaws.com
2022-07-25 08:37:05,334 | INFO | Setting up ssh connection...
2022-07-25 08:38:25,360 | INFO | Setting up ssh connection...
2022-07-25 08:38:49,480 | INFO | Setting up ssh connection...
2022-07-25 08:38:49,694 | INFO | Connected (version 2.0, client OpenSSH_8.2p1)
2022-07-25 08:38:50,711 | INFO | Authentication (publickey) successful!
2022-07-25 08:38:50,711 | INFO | Pulling container: huggingface/optimum-habana:latest...
```
Remote Runner will log all steps it takes to launch your training as well as all of the outputs during the training to the terminal. At the end of the training, you'll see a summary of your training job with the duration each step took and an estimated cost.
```bash
2022-07-25 10:22:38,851 | INFO | Terminating instance: i-091459df0356ae68b
2022-07-25 10:24:41,764 | INFO | Deleting security group: rm-runner-mjcf
2022-07-25 10:24:43,093 | INFO | Deleting key: rm-runner-mjcf
2022-07-25 10:24:44,055 | INFO | Total time: 594s
2022-07-25 10:24:44,055 | INFO | Startup time: 174s
2022-07-25 10:24:44,056 | INFO | Execution time: 296s
2022-07-25 10:24:44,056 | INFO | Termination time: 124s
2022-07-25 10:24:44,056 | INFO | Estimated cost: $1.71
```
We can see that our training finished successfully, took a total of 594s, and cost only $1.71 thanks to the use of Habana Gaudi.
We can now check the results and the model on the Hugging Face Hub. For me it is [philschmid/distilbert-imdb-habana-remote-runner](https://huggingface.co/philschmid/distilbert-imdb-habana-remote-runner)
![tensorboard](/static/blog/habana-gaudi-ec2-runner/tensorboard.png)
## Conclusion
Remote Runner helps you easily migrate your training to the cloud and experiment quickly with different instance types.
With support for custom deep learning chips, it makes it easy to migrate from more expensive and slower instances to faster, more optimized ones, e.g. Habana Gaudi DL1. If you are interested in why you should migrate GPU workloads to Habana Gaudi, check out: [Hugging Face Transformers and Habana Gaudi AWS DL1 Instances](https://www.philschmid.de/habana-distributed-training)
If you run into an issue with Remote Runner or have feature requests don't hesitate to open an [issue on Github.](https://github.com/philschmid/deep-learning-remote-runner/issues)
Also, make sure to check out the [optimum-habana](https://github.com/huggingface/optimum-habana/tree/main/examples) repository.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |