title | url | date | tags | summary | content |
---|---|---|---|---|---|
Google Colab the free GPU/TPU Jupyter Notebook Service | https://www.philschmid.de/google-cola-the-free-gpu-jupyter | 2020-02-26 | [
"Machine Learning"
] | A Short Introduction to Google Colab as a free Jupyter notebook service from Google. Learn how to use Accelerated Hardware like GPUs and TPUs to run your Machine learning completely for free in the cloud. | ## What is Google Colab
**Google Colaboratory**, or "Colab" for short, is a free Jupyter notebook service from Google. It requires no setup and
runs entirely in the cloud. In Google Colab you can write, execute, save and share your Jupyter notebooks. You access
powerful computing resources like TPUs and GPUs all for free through your browser. All major Python libraries like
[Tensorflow](https://www.tensorflow.org/), [Scikit-learn](https://scikit-learn.org/), [PyTorch](https://pytorch.org/),
[Pandas](https://pandas.pydata.org/), etc. are pre-installed. Google Colab requires no configuration, you only need a
Google Account and then you are good to go. Your notebooks are stored in your [Google Drive](https://drive.google.com/),
or can be loaded from [GitHub](https://github.com/). Colab notebooks can be shared just as you would with Google Docs or
Sheets. Simply click the Share button at the top right of any Colab notebook, or follow these Google Drive
[file sharing instructions](https://support.google.com/drive/answer/2494822?co=GENIE.Platform%3DDesktop&hl=en).
For more information, see the official FAQ from Google Research. You can find the FAQ at
[Colaboratory – Google](https://research.google.com/colaboratory/faq.html) or you can have a look at the introduction
video [Get started with Google Colaboratory (Coding TensorFlow) - YouTube](https://www.youtube.com/watch?v=inN8seMm7UI)
## Is it free?
**Yes, it is completely free of charge.** You only need a Google account, which most of you probably already have. You can
use the CPU, GPU & TPU runtimes completely for free. In some cases, Google also offers the opportunity to extend the
runtime to one with 25GB of memory, also free of charge.
Recently, Google introduced "Colab Pro", a paid version for \$9.99/month. With "Colab Pro" you get priority access
to GPUs and TPUs as well as more memory. You can stay connected to your notebooks for up to 24 hours, while in the free
version the connection limit is 12 hours per day. For more information read here:
[Google Colab Pro](https://colab.research.google.com/signup?utm_source=faq&utm_medium=link&utm_campaign=why_arent_resources_guaranteed).
## Resources and Runtimes
| Type | Size |
| ------ | ------------------------------------- |
| CPU | 2x |
| Memory | 12.52GB |
| GPU | T4 with 7.98GB or K80 with 11.97GB |
| TPUv2 | 8 units |
| Disk | at least 25GB, increases as needed |
## How to use accelerated hardware
Changing the hardware runtime is as easy as it gets. You just have to navigate to "Runtime" -> "Change runtime type"
and select your preferred accelerator type, GPU or TPU.
![change-runtime](/static/blog/google-cola-the-free-gpu-jupyter/change-runtime.gif)
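After the runtime restarts, a quick way to confirm that an accelerator is actually attached is to query it from a cell. The snippet below is a minimal sketch assuming you selected a GPU runtime; on a CPU-only runtime it will simply report that no device is available.
```python
# check whether a GPU is visible to the runtime (assumes a GPU runtime was selected)
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# the same information is available via the NVIDIA CLI tool inside the notebook
!nvidia-smi
```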
## How to get started
In the following section, I will describe and show some of the best features Google Colab has to offer. I created a
[Colab Notebook](https://colab.research.google.com/drive/1nwJ0BQjZACbGbt-AfG93LjJNT05mp4gw) with all of the features for
you to explore.
### Creating a Colab Notebook
You can create a Colab notebook directly in the [Colab Environment](https://colab.research.google.com/) or through your
Google Drive.
### Access your Google Drive storage in your Colab notebook by mounting it
If you want to mount your Google Drive to your notebook, you simply have to execute the snippet below. After you execute
it, you will see a URL where you have to log in to your Google account and authorize Google Colab to access your Drive
storage. Afterward, you can copy the authorization code from that page into the displayed input field in the Colab notebook.
```python
from google.colab import drive
drive.mount('/content/drive/')
```
You can show your files with `!ls /content/drive` or use the navigation on the left side.
### Upload and Download local files to your Colab notebook
You can easily upload and download files from your local machine by executing `files.upload()`, which creates an HTML
file input field, and `files.download()`.
#### Upload
```python
from google.colab import files
uploaded = files.upload()
```
#### Download
```python
from google.colab import files
files.download("File Name")
```
##### Download a complete folder by zipping it
```python
from google.colab import files
import os
import zipfile

foldername = "test_folder"
# create the archive and add every file in the folder, otherwise the zip stays empty
with zipfile.ZipFile(f'{foldername}.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for root, dirs, filenames in os.walk(foldername):
        for filename in filenames:
            zf.write(os.path.join(root, filename))
files.download(f'{foldername}.zip')
```
### Change your directory permanently
You can change your directory permanently from `/content` to something you like by executing `%cd path` in a cell. This
is very useful if you clone your git repository into your Colab notebook.
```python
%cd path
```
### Show an image in Colab
You can show pictures inline, just as you do in Jupyter, with this simple snippet:
```python
from IPython.display import Image, display
display(Image('image.jpg'))
```
### Advanced Pandas table
Google Colab offers an improved view of data frames in addition to the normal, familiar Jupyter notebook view, where
you can filter columns directly without writing Python. You can even search for a range in date fields. You enable it by
executing one line of code.
```python
%load_ext google.colab.data_table
```
![pandas-extended-view](/static/blog/google-cola-the-free-gpu-jupyter/pandas-extended-view.jpg)
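Once the extension is loaded, any DataFrame you display is rendered with the interactive filter controls. Here is a minimal, self-contained sketch; the column names and values are made up purely for illustration.
```python
import pandas as pd

# after running %load_ext google.colab.data_table, displaying a DataFrame
# renders the interactive, filterable table instead of the static HTML view
df = pd.DataFrame({
    "date": pd.date_range("2020-01-01", periods=5, freq="D"),
    "city": ["Berlin", "Munich", "Hamburg", "Cologne", "Frankfurt"],
    "value": [12.3, 7.1, 9.8, 4.2, 15.0],
})
df
```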
### How to use git in Colab
Google Colab provides a lot of benefits, but one downside is that you have to save your notebooks to your Google Drive.
Normally you would use some kind of git tool. The easiest way to work around this is either by copying your notebook from
GitHub into your Colab environment with this
[easy copy integration for notebooks](https://colab.research.google.com/github/) or by using CLI commands to clone
your private and public repositories into your Colab environment. The only problem with using GitHub repositories in Colab is that you
cannot push back to your repository; you have to save it manually ("File" -> "Save a copy as a GitHub Gist" or "Save a
copy in GitHub"). If you want to integrate your repository directly, you have to set up git in your Colab environment
like you normally do on your local machine. Chella wrote an article in Towards Data Science on how to do it:
[From Git to Colab, via SSH - Towards Data Science](https://towardsdatascience.com/using-git-with-colab-via-ssh-175e8f3751ec)
```
# git clone private repository
!git clone https://username:password@github.com/username/repository.git
# git clone public repository
!git clone https://github.com/fastai/courses.git
```
An extra tip: after you clone your repository, you can permanently change the directory to the repository by executing
`%cd /content/your_repository_name`. After that, every cell will be executed in the directory of your repository.
### Execute CLI commands
You can execute CLI commands, for example to install or update Python packages or to run bash scripts, just by
putting `!` before the command.
```
!pip install fastai
```
### Customize Shortcuts and Change the Theme
You can customize shortcuts by navigating to "Tools" -> "Keyboard shortcuts…". If you want to change your theme, navigate
to "Tools" -> "Settings" and change it under "Site".
![customizing_shortcuts_and_changing_theme](/static/blog/google-cola-the-free-gpu-jupyter/customizing_shortcuts_and_changing_theme.gif)
---
Thanks for reading my first blog post about Google Colaboratory.
See you soon 😊 |
Hugging Face Transformers Examples | https://www.philschmid.de/huggingface-transformers-examples | 2023-01-26 | [
"HuggingFace",
"Transformers",
"BERT",
"PyTorch"
] | Learn how to leverage Hugging Face Transformers to easily fine-tune your models. | <html class="max-w-none pt-6 pb-8 font-serif " itemscope itemtype="https://schema.org/FAQPage">
Machine learning and the adoption of the Transformer architecture are rapidly growing and will revolutionize the way we live and work. From self-driving cars to personalized medicine, the applications of [Transformers](https://huggingface.co/docs/transformers/index) are limitless. In this blog post, we'll explore how to leverage [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) and walk through examples from natural language processing to computer vision. Whether you're new to [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) or an expert, this post is sure to provide valuable insights and inspiration.
We will learn about the following:
1. [What is Hugging Face Transformers?](#what-is-hugging-face-transformers)
2. [What are Transformers’ examples?](#what-are-transformers-examples)
3. [How to use Transformers examples?](#how-to-use-transformers-examples)
4. [How to use your own data?](#how-to-use-your-own-data)
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="what-is-hugging-face-transformers">What is Hugging Face Transformers?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
[Hugging Face Transformers](https://huggingface.co/docs/transformers/index) is a Python library of pre-trained state-of-the-art machine learning models for natural language processing, computer vision, speech, or multi-modalities. [Transformers](https://huggingface.co/docs/transformers/index) provides access to popular Transformer architecture, including BERT, GPT2, RoBERTa, VIT, Whisper, Wav2vec2, T5, LayoutLM, and CLIP. These models support common tasks in different modalities, such as:
📝 Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
🖼️ Computer Vision: image classification, object detection, and segmentation.
🗣️ Audio: automatic speech recognition and audio classification.
🐙 Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
The library can be used with the PyTorch, TensorFlow, or JAX framework and allows users to easily fine-tune or use the pre-trained models on their own data. If you are new to Hugging Face [Transformers](https://huggingface.co/docs/transformers/index), check out the completely free Hugging Face course at: [https://huggingface.co/course](https://huggingface.co/course/chapter1/1)
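As a quick illustration of how little code it takes to use a pre-trained model, here is a minimal sketch using the `pipeline` API; the sentiment-analysis task and the example sentence are just placeholders, and the first call downloads a default model from the Hub.
```python
from transformers import pipeline

# load a default pre-trained model for the task from the Hugging Face Hub
classifier = pipeline("sentiment-analysis")

# run inference on an example sentence
result = classifier("Hugging Face Transformers makes it easy to use state-of-the-art models.")
print(result)
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```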
</div>
</div>
</div>
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="what-are-transformers-examples">What are Transformers’ examples?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
As we know, [Transformers](https://huggingface.co/docs/transformers/index) can be used to fine-tune models like BERT, but did you know that the [GitHub repository](https://github.com/huggingface/transformers) of transformers provides over 20 ready-to-use [examples](https://github.com/huggingface/transformers/tree/main/examples)?
[Hugging Face Transformers](https://huggingface.co/docs/transformers/index) examples are maintained Python scripts to fine-tune or pre-train [Transformers](https://huggingface.co/docs/transformers/index) models. Currently, there are examples available for:
- [Audio Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- [Contrastive Image Text](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text)
- [Image Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification)
- [Image Pretraining](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining)
- [Language Modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling)
- [Multiple Choice](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice)
- [Question Answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)
- [Semantic Segmentation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation)
- [Speech Pretraining](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining)
- [Speech Recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- [Summarization](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification)
- [Text Generation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation)
- [Token Classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification)
- [Translation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation)
Example scripts can be used with [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax). As the name "examples" suggests, these are examples to help [Transformers](https://huggingface.co/docs/transformers/index) users get started quickly, serve as inspiration and a basis for your own scripts, and enable users to run tests.
</div>
</div>
</div>
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="how-to-use-transformers-examples">How to use Transformers examples?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Each release of [Transformers](https://huggingface.co/docs/transformers/index) has its own set of example scripts, which are tested and maintained. This is important to keep in mind when using `examples/`: if you try to run an example from, e.g., a newer version than the `transformers` version you have installed, it might fail. All examples are documented in the repository with a README, which describes the features of the example and which arguments are supported. All `examples` provide an identical set of arguments to make it easy for users to switch between tasks. Now, let's get started.
### 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including `transformers` and `datasets`. The version of `transformers` we install will be the version of the examples we are going to use. If you have `transformers` already installed, you need to check your version.
```bash
pip install torch
pip install "transformers==4.25.1" datasets --upgrade
```
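If you already had `transformers` installed before running the cell above, you can quickly check which version you ended up with, so you know which tag to check out in the next step. This is just a generic check, not part of the original example:
```python
import transformers

# print the installed version so you can check out the matching example scripts
print(transformers.__version__)
```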
### 2. Download the example script
The example scripts are stored in the [GitHub repository](https://github.com/huggingface/transformers) of transformers. This means we first need to clone the repository and then check out the release of the `transformers` version we have installed in step 1 (for us, `4.25.1`).
```bash
git clone https://github.com/huggingface/transformers
cd transformers
git checkout tags/v4.25.1 # change 4.25.1 to your version if different
```
### 3. Fine-tune BERT for text-classification
Before we can run our script we first need to define the arguments we want to use. For `text-classification` we need at least a `model_name_or_path`, which can be any supported architecture from the [Hugging Face Hub](https://huggingface.co) or a local path to a `transformers` model. Additional parameters we will use are:
- `dataset_name` : an ID for a dataset hosted on the [Hugging Face Hub](https://huggingface.co/datasets)
- `do_train` & `do_eval`: to train and evaluate our model
- `num_train_epochs`: the number of epochs we use for training.
- `per_device_train_batch_size`: the batch size used during training per GPU
- `output_dir`: where our trained model and logs will be saved
You can find a full list of supported parameters in the [script](https://github.com/huggingface/transformers/blob/6f3faf3863defe394e566c57b7d1ad3928c4ef49/examples/pytorch/text-classification/run_glue.py#L71). Before we can run our script, we have to make sure all dependencies needed for the example are installed. Every example script that requires additional dependencies beyond `transformers` and `datasets` provides a `requirements.txt` in its directory, which you can install.
```bash
pip install -r examples/pytorch/text-classification/requirements.txt
```
That's it, now we can run our script from the CLI, which will start training BERT for `text-classification` on the `emotion` dataset.
```bash
python3 examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--dataset_name emotion \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--num_train_epochs 3 \
--output_dir /bert-test
```
### 4. Fine-tune BART for summarization
In section 3, we learned how easy it is to leverage the `examples` to fine-tune a BERT model for `text-classification`. In this section, we show you how easy it is to switch between different tasks. We will now fine-tune BART for summarization on the [CNN dailymail dataset](https://huggingface.co/datasets/cnn_dailymail). We will provide the same arguments as for `text-classification`, but extend them with:
- `dataset_config_name` to use a specific version of the dataset
- `text_column` the field in our dataset, which holds the text we want to summarize
- `summary_column` the field in our dataset, which holds the summary we want to learn.
Every example script that requires additional dependencies beyond `transformers` and `datasets` provides a `requirements.txt` in its directory, which you can install.
```bash
pip install -r examples/pytorch/summarization/requirements.txt
```
That's it, now we can run our script from the CLI, which will start training BART for `summarization` on the `cnn_dailymail` dataset.
```bash
python3 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path facebook/bart-base \
--dataset_name cnn_dailymail \
--dataset_config_name "3.0.0" \
--text_column "article" \
--summary_column "highlights" \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--num_train_epochs 3 \
--output_dir /bert-test
```
</div>
</div>
</div>
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<h2 itemprop="name" id="how-to-use-your-own-data">How to use your own data?</h2>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
In the previous section, we learned how to use Transformers examples with datasets available on the [Hugging Face Hub](https://huggingface.co/datasets), but that's not all you can do. Hugging Face Transformers examples also support local CSV and JSON files you can use for training your models. In this section, we see how to use a local CSV file with our `text-classification` example.
This section assumes that you completed steps 1 & 2 from the “How to use Transformers examples?” section.
To be able to use local data files, we will provide the same arguments as for `text-classification`, but extend them with:
- `train_file`: path pointing to a local CSV or JSONLINES file with your training data
- `validation_file`: path pointing to a local CSV or JSONLINES file with your validation data
Both files should have the fields `text`, which contains our data, and `label`, which holds the class label for the `text`.
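If your data is not yet in that format, a minimal sketch of producing such CSV files with pandas could look like the following; the texts, labels, and file paths are made-up placeholders, not part of the original example.
```python
import os
import pandas as pd

# hypothetical example rows; replace them with your own data
train = pd.DataFrame({
    "text": ["I love this product", "This was a terrible experience"],
    "label": [1, 0],
})
validation = pd.DataFrame({
    "text": ["Works as expected"],
    "label": [1],
})

# write the files to the paths used in the training command below
os.makedirs("local/path", exist_ok=True)
train.to_csv("local/path/train.csv", index=False)
validation.to_csv("local/path/eval.csv", index=False)
```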
```bash
pip install -r examples/pytorch/text-classification/requirements.txt
```
That's it, now we can run our script from the CLI, which will start training BERT for `text-classification` on our local CSV files.
```bash
python3 examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--train_file local/path/train.csv \
--validation_file local/path/eval.csv \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--num_train_epochs 3 \
--output_dir /bert-test
```
</div>
</div>
</div>
</html>
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
BERT Text Classification in a different language | https://www.philschmid.de/bert-text-classification-in-a-different-language | 2020-05-22 | [
"NLP",
"Bert",
"HuggingFace"
] | Build a non-English (German) BERT multi-class text classification model with HuggingFace and Simple Transformers. | Currently, we have 7.5 billion people living in the world in around 200 nations. Only
[1.2 billion people of them are native English speakers](https://en.wikipedia.org/wiki/List_of_countries_by_English-speaking_population).
This leads to a lot of unstructured non-English textual data.
Most of the tutorials and blog posts demonstrate how to build text classification, sentiment analysis,
question-answering, or text generation models with BERT-based architectures in English. To fill this gap, I am going
to show you how to build a non-English multi-class text classification model.
![native-english-map](/static/blog/bert-text-classification-in-a-different-language/map.png)
Since you opened this article, I think it's safe to assume that you have heard of BERT. If you haven't, or if you'd like a
refresher, I recommend reading this [paper](https://arxiv.org/pdf/1810.04805.pdf).
In deep learning, there are currently two options for how to build language models. You can build either monolingual
models or multilingual models.
> "multilingual, or not multilingual, that is the question" - as Shakespeare would have said
Multilingual models describe machine learning models that can understand different languages. An example of a
multilingual model is [mBERT](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)
from Google research.
[This model supports and understands 104 languages.](https://github.com/google-research/bert/blob/master/multilingual.md)
Monolingual models, as the name suggests, can understand one language.
Multilingual models are already achieving good results on certain tasks. But these models are bigger, need more data,
and also more time to be trained. These properties lead to higher costs due to the larger amount of data and time
resources needed.
Due to this fact, I am going to show you how to train a monolingual non-English BERT-based multi-class text
classification model. Wow, that was a long sentence!
![meme](/static/blog/bert-text-classification-in-a-different-language/meme.png)
---
## Tutorial
We are going to use [Simple Transformers](https://github.com/ThilinaRajapakse/simpletransformers) - an NLP library based
on the [Transformers](https://github.com/huggingface/transformers) library by HuggingFace. Simple Transformers allows us
to fine-tune Transformer models in a few lines of code.
As the dataset, we are going to use the [Germeval 2019](https://projects.fzai.h-da.de/iggsa/projekt/), which consists of
German tweets. We are going to detect and classify abusive language tweets. These tweets are categorized in 4 classes:
`PROFANITY`, `INSULT`, `ABUSE`, and `OTHERS`. The highest score achieved on this dataset is `0.7361`.
### We are going to:
- install Simple Transformers library
- select a pre-trained monolingual model
- load the dataset
- train/fine-tune our model
- evaluate the results of training
- save the trained model
- load the model and predict a real example
I am using Google Colab with a GPU runtime for this tutorial. If you are not sure how to use a GPU Runtime take a look
[here](https://www.philschmid.de/google-colab-the-free-gpu-tpu-jupyter-notebook-service).
---
## Install Simple Transformers library
First, we install `simpletransformers` with pip. If you are not using Google colab you can check out the installation
guide [here](https://github.com/ThilinaRajapakse/simpletransformers).
```python
# install simpletransformers
!pip install simpletransformers
# check installed version
!pip freeze | grep simpletransformers
# simpletransformers==0.28.2
```
---
## Select a pre-trained monolingual model
Next, we select the pre-trained model. As mentioned above the Simple Transformers library is based on the Transformers
library from HuggingFace. This enables us to use every pre-trained model provided in the
[Transformers library](https://huggingface.co/transformers/pretrained_models.html) and all community-uploaded models.
For a list that includes all community-uploaded models, I refer to
[https://huggingface.co/models](https://huggingface.co/models).
We are going to use the `distilbert-base-german-cased` model, a
[smaller, faster, cheaper version of BERT](https://huggingface.co/transformers/model_doc/distilbert.html). It uses 40%
fewer parameters than `bert-base-uncased` and runs 60% faster while still preserving over 95% of BERT's performance.
---
## Load the dataset
The dataset is stored in two text files we can retrieve from the
[competition page](https://projects.fzai.h-da.de/iggsa/). One option to download them is using 2 simple `wget` CLI
commands.
```python
!wget https://projects.fzai.h-da.de/iggsa/wp-content/uploads/2019/08/germeval2019GoldLabelsSubtask1_2.txt
!wget https://projects.fzai.h-da.de/iggsa/wp-content/uploads/2019/09/germeval2019.training_subtask1_2_korrigiert.txt
```
Afterward, we use some `pandas` magic to create a dataframe.
```python
import pandas as pd
class_list = ['INSULT','ABUSE','PROFANITY','OTHER']
df1 = pd.read_csv('germeval2019GoldLabelsSubtask1_2.txt',sep='\t', lineterminator='\n',encoding='utf8',names=["tweet", "task1", "task2"])
df2 = pd.read_csv('germeval2019.training_subtask1_2_korrigiert.txt',sep='\t', lineterminator='\n',encoding='utf8',names=["tweet", "task1", "task2"])
df = pd.concat([df1,df2])
df['task2'] = df['task2'].str.replace('\r', "")
df['pred_class'] = df.apply(lambda x: class_list.index(x['task2']),axis=1)
df = df[['tweet','pred_class']]
print(df.shape)
df.head()
```
Since we don't have a test dataset, we split our dataset into `train_df` and `test_df`. We use 90% of the data for training
(`train_df`) and 10% for testing (`test_df`).
```python
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(df, test_size=0.10)
print('train shape: ',train_df.shape)
print('test shape: ',test_df.shape)
# train shape: (6309, 2)
# test shape: (702, 2)
```
---
## Load pre-trained model
The next step is to load the pre-trained model. We do this by creating a `ClassificationModel` instance called `model`.
This instance takes the parameters of:
- the architecture (in our case `"bert"`)
- the pre-trained model (`"distilbert-base-german-cased"`)
- the number of class labels (`4`)
- and our hyperparameter for training (`train_args`).
You can configure the hyperparameters within a wide range of possibilities. For a detailed description of each
attribute, please refer to the
[documentation](https://simpletransformers.ai/docs/usage/#configuring-a-simple-transformers-model).
```python
from simpletransformers.classification import ClassificationModel
# define hyperparameter
train_args ={"reprocess_input_data": True,
"fp16":False,
"num_train_epochs": 4}
# Create a ClassificationModel
model = ClassificationModel(
"bert", "distilbert-base-german-cased",
num_labels=4,
args=train_args
)
```
---
## Train/fine-tune our model
To train our model we only need to run `model.train_model()` and specify which dataset to train on.
```python
model.train_model(train_df)
```
---
## Evaluate the results of training
After we trained our model successfully we can evaluate it. Therefore we create a simple helper function
`f1_multiclass()`, which is used to calculate the `f1_score`. The `f1_score` is a measure for model accuracy. More on
that [here](https://en.wikipedia.org/wiki/F1_score).
```python
from sklearn.metrics import f1_score, accuracy_score
def f1_multiclass(labels, preds):
    return f1_score(labels, preds, average='micro')
result, model_outputs, wrong_predictions = model.eval_model(test_df, f1=f1_multiclass, acc=accuracy_score)
# {'acc': 0.6894586894586895,
# 'eval_loss': 0.8673831869594075,
# 'f1': 0.6894586894586895,
# 'mcc': 0.25262380289641617}
```
We achieved an `f1_score` of `0.6895`. Initially, this seems rather low, but keep in mind: the highest submission at
[Germeval 2019](https://projects.fzai.h-da.de/iggsa/submissions/) was `0.7361`. We would have achieved a top 20 rank
without tuning the hyperparameter. This is pretty impressive!
In a future post, I am going to show you how to achieve a higher `f1_score` by tuning the hyperparameters.
---
## Save the trained model
Simple Transformers saves the `model` automatically every `2000` steps and at the end of the training process. The
default directory is `outputs/`. But the `output_dir` is a hyperparameter and can be overwritten. I created a helper
function `pack_model()`, which we use to `pack` all required model files into a `tar.gz` file for deployment.
```python
import os
import tarfile
def pack_model(model_path='', file_name=''):
    files = [files for root, dirs, files in os.walk(model_path)][0]
    with tarfile.open(file_name + '.tar.gz', 'w:gz') as f:
        for file in files:
            f.add(f'{model_path}/{file}')
# run the function
pack_model('output_path','model_name')
```
---
## Load the model and predict a real example
As a final step, we load and predict a real example. Since we packed our files a step earlier with `pack_model()`, we
have to `unpack` them first. Therefore I wrote another helper function `unpack_model()` to unpack our model files.
```python
import os
import tarfile
def unpack_model(model_name=''):
    tar = tarfile.open(f"{model_name}.tar.gz", "r:gz")
    tar.extractall()
    tar.close()
unpack_model('model_name')
```
To load a saved model, we only need to provide the `path` to our saved files and initialize it the same way as we did it
in the training step. _Note: you will need to specify the correct (usually the same used in training) args when loading
the model._
```python
from simpletransformers.classification import ClassificationModel
# define hyperparameter
train_args ={"reprocess_input_data": True,
"fp16":False,
"num_train_epochs": 4}
# Create a ClassificationModel with our trained model
model = ClassificationModel(
"bert", 'path_to_model/',
num_labels=4,
args=train_args
)
```
After initializing it, we can use the `model.predict()` function to classify a given input. In this
example, we take a tweet from the Germeval 2018 dataset.
```python
class_list = ['INSULT','ABUSE','PROFANITY','OTHER']
test_tweet1 = "Meine Mutter hat mir erzählt, dass mein Vater einen Wahlkreiskandidaten nicht gewählt hat, weil der gegen die Homo-Ehe ist"
predictions, raw_outputs = model.predict([test_tweet1])
print(class_list[predictions[0]])
# OTHER
test_tweet2 = "Frau #Böttinger meine Meinung dazu ist sie sollten uns mit ihrem Pferdegebiss nicht weiter belästigen #WDR"
predictions, raw_outputs = model.predict([test_tweet2])
print(class_list[predictions[0]])
# INSULT
```
Our model predicted the correct classes `OTHER` and `INSULT`.
---
## Summary
To conclude, we can say we achieved our goal of creating a non-English BERT-based text classification model.
Our example referred to the German language but can easily be transferred into another language. HuggingFace offers a
lot of pre-trained models for languages like French, Spanish, Italian, Russian, Chinese, ...
---
Thanks for reading. You can find the colab notebook with the complete code
[here](https://colab.research.google.com/drive/1kAlGGGsZaFaFoL0lZ0HK4xUR6QS8gipn#scrollTo=JG2gN7KUqyjY).
If you have any questions, feel free to contact me. |
Semantic Segmentation with Hugging Face's Transformers & Amazon SageMaker | https://www.philschmid.de/image-segmentation-sagemaker | 2022-05-03 | [
"AWS",
"SegFormer",
"Vision",
"Sagemaker"
] | Learn how to do image segmentation with Hugging Face Transformers, SegFormer and Amazon SageMaker. | Transformer models are changing the world of machine learning, starting with natural language processing, and now, with audio and computer vision. Hugging Face's mission is to democratize good machine learning and give anyone the opportunity to use these new state-of-the-art machine learning models.
Together with Amazon SageMaker and AWS we have been working on extending the functionalities of the Hugging Face Inference DLC and the Python SageMaker SDK to make it easier to use speech and vision models together with `transformers`.
You can now use the Hugging Face Inference DLC to do [automatic speech recognition](https://huggingface.co/tasks/automatic-speech-recognition) using Meta AI's [wav2vec2](https://arxiv.org/abs/2006.11477) model or Microsoft's [WavLM](https://arxiv.org/abs/2110.13900), or use NVIDIA's [SegFormer](https://arxiv.org/abs/2105.15203) for [image segmentation](https://huggingface.co/tasks/image-segmentation).
This guide will walk you through how to do [Image Segmentation](https://huggingface.co/tasks/image-segmentation) using [segformer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) and the new `DataSerializer`.
![overview](/static/blog/image-segmentation-sagemaker/semantic_segmentation.png)
In this example you will learn how to:
1. Setup a development Environment and permissions for deploying Amazon SageMaker Inference Endpoints.
2. Deploy a segformer model to Amazon SageMaker for image segmentation
3. Send requests to the endpoint to do image segmentation.
Let's get started! 🚀
---
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
## 1. Setup a development Environment and permissions for deploying Amazon SageMaker Inference Endpoints.
Setting up the development environment and permissions needs to be done for the automatic-speech-recognition example and the semantic-segmentation example. First, we update the `sagemaker` SDK to make sure we have the new `DataSerializer`.
```python
!pip install sagemaker segmentation-mask-overlay pillow matplotlib --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.86.0"
```
After we have updated the SDK, we can set the permissions.
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 2. Deploy a segformer model to Amazon SageMaker for image segmentation
Image Segmentation divides an image into segments where each pixel in the image is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation.
We use the [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) model for our segmentation endpoint. This model is fine-tuned on ADE20k (a scene-centric image dataset) at resolution 512x512.
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serializers import DataSerializer
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'nvidia/segformer-b0-finetuned-ade-512-512',
'HF_TASK':'image-segmentation',
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
```
Before we are able to deploy our `HuggingFaceModel` class, we need to create a new serializer which supports our image data. Serializers are used in the Predictor and in the `predict` method to serialize our data to a specific `mime-type`, which is sent to the endpoint. The default serializer for the HuggingFacePredictor is a JSON serializer, but since we are not going to send text data to the endpoint, we will use the DataSerializer.
```python
# create a serializer for the data
image_serializer = DataSerializer(content_type='image/x-image') # using x-image to support multiple image formats
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.g4dn.xlarge', # ec2 instance type
serializer=image_serializer, # serializer for our image data.
)
```
## 3. Send requests to the endpoint to do image segmentation.
The `.deploy()` returns a `HuggingFacePredictor` object with our `DataSerializer`, which can be used to request inference. This `HuggingFacePredictor` makes it easy to send requests to your endpoint and get the results back.
We will use 2 different methods to send requests to the endpoint:
a. Provide an image file via path to the predictor
b. Provide a binary image data object to the predictor
### a. Provide an image file via path to the predictor
Using an image file as input is as easy as providing the path to its location. The `DataSerializer` will then read it and send the bytes to the endpoint.
We can use a `fixtures_ade20k` sample hosted on huggingface.co
```python
!wget https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/raw/main/ADE_val_00000001.jpg
```
Before we send our request, let's create a helper function to display our segmentation results.
```python
from PIL import Image
import io
from segmentation_mask_overlay import overlay_masks
import numpy as np
import base64
import matplotlib.pyplot as plt
def stringToRGB(base64_string):
    # convert base64 string to numpy array
    imgdata = base64.b64decode(str(base64_string))
    image = Image.open(io.BytesIO(imgdata))
    return np.array(image)

def get_overlay(original_image_path, result):
    masks = [stringToRGB(r["mask"]).astype('bool') for r in result]
    masks_labels = [r["label"] for r in result]
    cmap = plt.cm.tab20(np.arange(len(masks_labels)))
    image = Image.open(original_image_path)
    overlay_masks(image, masks, labels=masks_labels, colors=cmap, mask_alpha=0.5)
```
To send a request with the path to our image file, we can use the following code:
```python
image_path = "ADE_val_00000001.jpg"
res = predictor.predict(data=image_path)
print(res[0].keys())
get_overlay(image_path,res)
```
![overlay](/static/blog/image-segmentation-sagemaker/example.png)
### b. Provide binary image data object to the predictor
Instead of providing a path to the image file, we can also directly provide its bytes by reading the file in Python.
_make sure `ADE_val_00000001.jpg` is in the directory_
```python
image_path = "ADE_val_00000001.jpg"
with open(image_path, "rb") as data_file:
    image_data = data_file.read()
res = predictor.predict(data=image_data)
print(res[0].keys())
get_overlay(image_path,res)
```
![overlay](/static/blog/image-segmentation-sagemaker/example.png)
### Clean up
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## Conclusion
We successfully managed to deploy SegFormer to Amazon SageMaker for image segmentation. The new `DataSerializer` makes it super easy to work with `mime-types` other than `json`/`txt`, which we are used to from NLP. We can use the `DataSerializer` to send images to the endpoint and get the results back.
With this support we can now build state-of-the-art computer vision systems on Amazon SageMaker with transparent insights on which models are used and how the data is processed. We could even go further and extend the inference part with a custom `inference.py` to include custom post-processing. |
Fine-tune FLAN-T5 for chat & dialogue summarization | https://www.philschmid.de/fine-tune-flan-t5 | 2022-12-27 | [
"T5",
"Summarization",
"HuggingFace",
"Chat"
] | Learn how to fine-tune Google's FLAN-T5 for chat & dialogue summarization using Hugging Face Transformers. | In this blog, you will learn how to fine-tune [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) for chat & dialogue summarization using Hugging Face Transformers. If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages.
In this example, we will use the [samsum](https://huggingface.co/datasets/samsum) dataset, a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load and prepare samsum dataset](#2-load-and-prepare-samsum-dataset)
3. [Fine-tune and evaluate FLAN-T5](#3-fine-tune-and-evaluate-flan-t5)
4. [Run Inference and summarize ChatGPT dialogues](#4-run-inference-and-summarize-chatgpt-dialogues)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: FLAN-T5, just a better T5
FLAN-T5 released with the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper is an enhanced version of T5 that has been finetuned in a mixture of tasks. The paper explores instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. The paper discovers that overall instruction finetuning is a general method for improving the performance and usability of pretrained language models.
![flan-t5](/static/blog/fine-tune-flan-t5/flan-t5.png)
- Paper: https://arxiv.org/abs/2210.11416
- Official repo: https://github.com/google-research/t5x
---
Now we know what FLAN-T5 is, let's get started. 🚀
_Note: This tutorial was created and run on a p3.2xlarge AWS EC2 Instance including a NVIDIA V100._
## 1. Setup Development Environment
Our first step is to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
```python
!pip install pytesseract transformers datasets rouge-score nltk tensorboard py7zr --upgrade
# install git-fls for pushing model and logs to the hugging face hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import notebook_login
notebook_login()
```
## 2. Load and prepare samsum dataset
We will use the [samsum](https://huggingface.co/datasets/samsum) dataset, a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.
```json
{
"id": "13818513",
"summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
"dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"
}
```
```python
dataset_id = "samsum"
```
To load the `samsum` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```python
from datasets import load_dataset
# Load dataset from the hub
dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 14732
# Test dataset size: 819
```
Let's check out an example of the dataset.
```python
from random import randrange
sample = dataset['train'][randrange(len(dataset["train"]))]
print(f"dialogue: \n{sample['dialogue']}\n---------------")
print(f"summary: \n{sample['summary']}\n---------------")
```
To train our model we need to convert our inputs (text) to token IDs. This is done by a 🤗 Transformers Tokenizer. If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id="google/flan-t5-base"
# Load tokenizer of FLAN-t5-base
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Before we can start training, we need to preprocess our data. Abstractive summarization is a text2text-generation task. This means our model will take a text as input and generate a summary as output. For this, we want to understand how long our input and output will be to be able to efficiently batch our data.
```python
from datasets import concatenate_datasets
# The maximum total input sequence length after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded.
tokenized_inputs = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["dialogue"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
max_source_length = max([len(x) for x in tokenized_inputs["input_ids"]])
print(f"Max source length: {max_source_length}")
# The maximum total sequence length for target text after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded."
tokenized_targets = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["summary"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
max_target_length = max([len(x) for x in tokenized_targets["input_ids"]])
print(f"Max target length: {max_target_length}")
```
```python
def preprocess_function(sample, padding="max_length"):
    # add prefix to the input for t5
    inputs = ["summarize: " + item for item in sample["dialogue"]]
    # tokenize inputs
    model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
    # Tokenize targets with the `text_target` keyword argument
    labels = tokenizer(text_target=sample["summary"], max_length=max_target_length, padding=padding, truncation=True)
    # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
    # padding in the loss.
    if padding == "max_length":
        labels["input_ids"] = [
            [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
        ]
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=["dialogue", "summary", "id"])
print(f"Keys of tokenized dataset: {list(tokenized_dataset['train'].features)}")
```
## 3. Fine-tune and evaluate FLAN-T5
After we have processed our dataset, we can start training our model. Therefore we first need to load our [FLAN-T5](https://huggingface.co/models?search=flan-t5) from the Hugging Face Hub. In this example, we are using an instance with an NVIDIA V100, meaning that we will fine-tune the `base` version of the model.
_I plan to do a follow-up post on how to fine-tune the `xxl` version of the model using Deepspeed._
```python
from transformers import AutoModelForSeq2SeqLM
# huggingface hub model id
model_id="google/flan-t5-base"
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
```
We want to evaluate our model during training. The `Trainer` supports evaluation during training by providing a `compute_metrics` function.
The most commonly used metric to evaluate summarization tasks is the [ROUGE score](<https://en.wikipedia.org/wiki/ROUGE_(metric)>) (short for Recall-Oriented Understudy for Gisting Evaluation). This metric does not behave like standard accuracy: it compares a generated summary against a set of reference summaries.
We are going to use the `evaluate` library to compute the `rouge` score.
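To get a feel for what the metric returns, here is a minimal, standalone sketch; the two sentences are made-up examples, and the actual `compute_metrics` function used during training follows below.
```python
import evaluate

rouge = evaluate.load("rouge")
# compare one generated summary against one reference summary
scores = rouge.compute(
    predictions=["Amanda baked cookies and will bring some to Jerry."],
    references=["Amanda baked cookies and will bring Jerry some tomorrow."],
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum values between 0 and 1
```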
```python
import evaluate
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
nltk.download("punkt")
# Metric
metric = evaluate.load("rouge")
# helper function to postprocess text
def postprocess_text(preds, labels):
    preds = [pred.strip() for pred in preds]
    labels = [label.strip() for label in labels]
    # rougeLSum expects newline after each sentence
    preds = ["\n".join(sent_tokenize(pred)) for pred in preds]
    labels = ["\n".join(sent_tokenize(label)) for label in labels]
    return preds, labels

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # Some simple post-processing
    decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    result = {k: round(v * 100, 4) for k, v in result.items()}
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    return result
```
Before we can start training, we need to create a `DataCollator` that will take care of padding our inputs and labels. We will use the `DataCollatorForSeq2Seq` from the 🤗 Transformers library.
```python
from transformers import DataCollatorForSeq2Seq
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = -100
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
```
The last step is to define the hyperparameters (`TrainingArguments`) we want to use for our training. We are leveraging the [Hugging Face Hub](https://huggingface.co/models) integration of the `Trainer` to automatically push our checkpoints, logs and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
# Hugging Face repository id
repository_id = f"{model_id.split('/')[1]}-{dataset_id}"
# Define training args
training_args = Seq2SeqTrainingArguments(
output_dir=repository_id,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
predict_with_generate=True,
fp16=False, # Overflows with fp16
learning_rate=5e-5,
num_train_epochs=5,
# logging & evaluation strategies
logging_dir=f"{repository_id}/logs",
logging_strategy="steps",
logging_steps=500,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
load_best_model_at_end=True,
# metric_for_best_model="overall_f1",
# push to hub parameters
report_to="tensorboard",
push_to_hub=False,
hub_strategy="every_save",
hub_model_id=repository_id,
hub_token=HfFolder.get_token(),
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["test"],
compute_metrics=compute_metrics,
)
```
We can start our training by using the `train` method of the `Trainer`.
```python
# Start training
trainer.train()
```
![flan-t5-tensorboard](/static/blog/fine-tune-flan-t5/flan-t5-tensorboard.png)
Nice, we have trained our model. 🎉 Let's evaluate the best model again on the test set.
```python
trainer.evaluate()
```
The best score we achieved is a `rouge1` score of `47.23`.
Let's save our results and tokenizer to the Hugging Face Hub and create a model card.
```python
# Save our tokenizer and create model card
tokenizer.save_pretrained(repository_id)
trainer.create_model_card()
# Push the results to the hub
trainer.push_to_hub()
```
## 4. Run Inference and summarize ChatGPT dialogues
Now that we have a trained model, we can use it to run inference. We will use the `pipeline` API from transformers and a `test` example from our dataset.
```python
from transformers import pipeline
from random import randrange
# load model and tokenizer from huggingface hub with pipeline
summarizer = pipeline("summarization", model="philschmid/flan-t5-base-samsum", device=0)
# select a random test sample
sample = dataset['test'][randrange(len(dataset["test"]))]
print(f"dialogue: \n{sample['dialogue']}\n---------------")
# summarize dialogue
res = summarizer(sample["dialogue"])
print(f"flan-t5-base summary:\n{res[0]['summary_text']}")
```
output
```bash
dialogue:
Abby: Have you talked to Miro?
Dylan: No, not really, I've never had an opportunity
Brandon: me neither, but he seems a nice guy
Brenda: you met him yesterday at the party?
Abby: yes, he's so interesting
Abby: told me the story of his father coming from Albania to the US in the early 1990s
Dylan: really, I had no idea he is Albanian
Abby: he is, he speaks only Albanian with his parents
Dylan: fascinating, where does he come from in Albania?
Abby: from the seacoast
Abby: Duress I believe, he told me they are not from Tirana
Dylan: what else did he tell you?
Abby: That they left kind of illegally
Abby: it was a big mess and extreme poverty everywhere
Abby: then suddenly the border was open and they just left
Abby: people were boarding available ships, whatever, just to get out of there
Abby: he showed me some pictures, like <file_photo>
Dylan: insane
Abby: yes, and his father was among the people
Dylan: scary but interesting
Abby: very!
---------------
flan-t5-base summary:
Abby met Miro yesterday at the party. Miro's father came from Albania to the US in the early 1990s. He speaks Albanian with his parents. The border was open and people were boarding ships to get out of there.
```
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Getting started with Pytorch 2.0 and Hugging Face Transformers | https://www.philschmid.de/getting-started-pytorch-2-0-transformers | 2023-03-16 | [
"Pytorch",
"BERT",
"HuggingFace",
"Optimization"
] | Learn how to get started with Pytorch 2.0 and Hugging Face Transformers and reduce your training time up to 2x. | On December 2, 2022, the PyTorch Team announced [PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/) at the PyTorch Conference, focused on better performance, being faster, more pythonic, and staying as dynamic as before.
This blog post explains how to get started with PyTorch 2.0 and Hugging Face Transformers today. It will cover how to fine-tune a BERT model for Text Classification using the newest PyTorch 2.0 features.
You will learn how to:
1. [Setup environment & install Pytorch 2.0](#1-setup-environment--install-pytorch-20)
2. [Load and prepare the dataset](#2-load-and-prepare-the-dataset)
3. [Fine-tune & evaluate BERT model with the Hugging Face `Trainer`](#3-fine-tune--evaluate-bert-model-with-the-hugging-face-trainer)
4. [Run Inference & test model](#4-run-inference--test-model)
Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.
## Quick intro: Pytorch 2.0
PyTorch 2.0 or, better, 1.14 is entirely backward compatible. Pytorch 2.0 will not require any modification to existing PyTorch code but can optimize your code by adding a single line of code with `model = torch.compile(model)`.
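As a minimal illustration of what that single line looks like in practice with a Transformers model (the model name and input sentence are just placeholders for this sketch):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-uncased"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# the only PyTorch 2.0-specific change: compile the model
model = torch.compile(model)

inputs = tokenizer("PyTorch 2.0 is backward compatible.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)
```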
If you ask yourself, why is there a new major version and no breaking changes? The PyTorch team answered this question in their [FAQ](https://pytorch.org/get-started/pytorch-2.0/#faqs): _“We were releasing substantial new features that we believe change how you meaningfully use PyTorch, so we are calling it 2.0 instead.”_
Those new features include top-level support for TorchDynamo, AOTAutograd, PrimTorch, and TorchInductor.
This allows PyTorch 2.0 to achieve 1.3x-2x training time speedups for [today's 46 model architectures](https://github.com/pytorch/torchdynamo/issues/681) from [Hugging Face Transformers](https://github.com/huggingface/transformers).
If you want to learn more about PyTorch 2.0, check out the official [“GET STARTED”](https://pytorch.org/get-started/pytorch-2.0/).
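Outside of the `Trainer`, that single line looks like the minimal sketch below (a toy model, only to show where the call goes).

```python
# Minimal sketch of the single torch.compile() line on a toy model (requires PyTorch 2.0).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
compiled_model = torch.compile(model)  # the one line that enables the new compiler stack

x = torch.randn(8, 128)
out = compiled_model(x)  # the first call triggers compilation, later calls reuse the optimized graph
print(out.shape)  # torch.Size([8, 10])
```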
---
Now that we know how PyTorch 2.0 works, let's get started. 🚀
_Note: This tutorial was created and run on a g5.xlarge AWS EC2 Instance, including an NVIDIA A10G GPU._
## 1. Setup environment & install Pytorch 2.0
Our first step is to install PyTorch 2.0 and the Hugging Face Libraries, including `transformers` and `datasets`.
```python
# Install PyTorch 2.0 with cuda 11.7
!pip install "torch>=2.0" --extra-index-url https://download.pytorch.org/whl/cu117 --upgrade --quiet
```
Additionally, we are installing `transformers` and `datasets`; the pinned `transformers` release includes the native integration of PyTorch 2.0 into the `Trainer`.
```python
# Install transformers and dataset
!pip install "transformers==4.27.1" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" tensorboard scikit-learn
# Install git-fls for pushing model and logs to the hugging face hub
!sudo apt-get install git-lfs --yes
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To push our model to the Hub, you must register on the [Hugging Face](https://huggingface.co/join). If you already have an account, you can skip this step. After you have an account, we will use the `login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```python
from huggingface_hub import login
login(
token="", # ADD YOUR TOKEN HERE
add_to_git_credential=True
)
```
## 2. Load and prepare the dataset
To keep the example straightforward, we are training a Text Classification model on the [BANKING77](https://huggingface.co/datasets/banking77) dataset. The BANKING77 dataset provides a fine-grained set of intents (classes) in a banking/finance domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection.
We will use the `load_dataset()` method from the [🤗 Datasets](https://huggingface.co/docs/datasets/index) library to load the `banking77`
```python
from datasets import load_dataset
# Dataset id from huggingface.co/dataset
dataset_id = "banking77"
# Load raw dataset
raw_dataset = load_dataset(dataset_id)
print(f"Train dataset size: {len(raw_dataset['train'])}")
print(f"Test dataset size: {len(raw_dataset['test'])}")
```
Let’s check out an example of the dataset.
```python
from random import randrange
random_id = randrange(len(raw_dataset['train']))
raw_dataset['train'][random_id]
# {'text': "I can't get google pay to work right.", 'label': 2}
```
To train our model, we need to convert our "Natural Language" to token IDs. This is done by a Tokenizer, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary). If you want to learn more about this, check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=pt) of the [Hugging Face Course](https://huggingface.co/course/chapter1/1).
```python
from transformers import AutoTokenizer
# Model id to load the tokenizer
model_id = "bert-base-uncased"
# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Tokenize helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True, return_tensors="pt")
# Tokenize dataset
raw_dataset = raw_dataset.rename_column("label", "labels") # to match Trainer
tokenized_dataset = raw_dataset.map(tokenize, batched=True,remove_columns=["text"])
print(tokenized_dataset["train"].features.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'labels'])
```
## 3. Fine-tune & evaluate BERT model with the Hugging Face `Trainer`
After we have processed our dataset, we can start training our model. We will use the bert-base-uncased model. The first step is to load our model with `AutoModelForSequenceClassification` class from the [Hugging Face Hub](https://huggingface.co/bert-base-uncased). This will initialize the pre-trained BERT weights with a classification head on top. Here we pass the number of classes (77) from our dataset and the label names to have readable outputs for inference.
```python
from transformers import AutoModelForSequenceClassification
# Model id to load the pre-trained model
model_id = "bert-base-uncased"
# Prepare model labels - useful for inference
labels = tokenized_dataset["train"].features["labels"].names
num_labels = len(labels)
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
# Download the model from huggingface.co/models
model = AutoModelForSequenceClassification.from_pretrained(
model_id, num_labels=num_labels, label2id=label2id, id2label=id2label
)
```
We evaluate our model during training. The `Trainer` supports evaluation during training by providing a `compute_metrics` method. We use the `evaluate` library to calculate the [f1 metric](https://huggingface.co/spaces/evaluate-metric/f1) during training on our test split.
```python
import evaluate
import numpy as np
# Metric Id
metric = evaluate.load("f1")
# Metric helper method
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return metric.compute(predictions=predictions, references=labels, average="weighted")
```
The last step is to define the hyperparameters (`TrainingArguments`) we use for our training. Here we are adding the PyTorch 2.0 introduced features for fast training times. To use the latest improvements of PyTorch 2.0, we only need to pass the `torch_compile` option in the `TrainingArguments`.
We also leverage the [Hugging Face Hub](https://huggingface.co/models) integration of the `Trainer` to push our checkpoints, logs, and metrics during training into a repository.
```python
from huggingface_hub import HfFolder
from transformers import Trainer, TrainingArguments
# Id for remote repository
repository_id = "bert-base-banking77-pt2"
# Define training args
training_args = TrainingArguments(
output_dir=repository_id,
per_device_train_batch_size=16,
per_device_eval_batch_size=8,
learning_rate=5e-5,
num_train_epochs=3,
# PyTorch 2.0 specifics
bf16=True, # bfloat16 training
torch_compile=True, # optimizations
optim="adamw_torch_fused", # improved optimizer
# logging & evaluation strategies
logging_dir=f"{repository_id}/logs",
logging_strategy="steps",
logging_steps=200,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="f1",
# push to hub parameters
report_to="tensorboard",
push_to_hub=True,
hub_strategy="every_save",
hub_model_id=repository_id,
hub_token=HfFolder.get_token(),
)
# Create a Trainer instance
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["test"],
compute_metrics=compute_metrics,
)
```
We can start our training by using the `train` method of the `Trainer`.
```python
# Start training
trainer.train()
```
![tensorboard](/static/blog/getting-started-pytorch-2-0-transformers/tensorboard.png)
Using PyTorch 2.0 and the supported features in `transformers` allows us to train our BERT model on `10_000` samples within `457.7964` seconds.
We also ran the training without the `torch_compile` option to compare the training times. The training without `torch_compile` took 696 seconds, had a `train_samples_per_second` value of 43.1 and an `f1` score of `0.929`.
```bash
{'train_runtime': 696.2701, 'train_samples_per_second': 43.1, 'eval_f1': 0.928788}
```
By using the `torch_compile` option and the `adamw_torch_fused` optimizer, the training is roughly 52% faster (a 1.52x speedup) compared to the training without PyTorch 2.0.
```bash
{'train_runtime': 457.7964, 'train_samples_per_second': 65.55, 'eval_f1': 0.931773}
```
Our absolute training time went down from 696s to 457s. The `train_samples_per_second` value increased from 43 to 65. The `f1` score is the same/slightly better than in the training without `torch_compile`.
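As a quick sanity check of how these numbers relate, the ~52% figure refers to the relative speedup (throughput), not the absolute reduction in wall-clock time:

```python
# Relating the two runs reported above.
baseline, compiled = 696.2701, 457.7964
print(f"throughput speedup:   {baseline / compiled:.2f}x")              # ~1.52x, i.e. ~52% faster
print(f"wall-clock reduction: {(baseline - compiled) / baseline:.1%}")  # ~34% less training time
```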
PyTorch 2.0 is incredibly powerful! 🚀
Let's save our results and tokenizer to the Hugging Face Hub and create a model card.
```python
# Save processor and create model card
tokenizer.save_pretrained(repository_id)
trainer.create_model_card()
trainer.push_to_hub()
```
## 4. Run Inference & test model
To wrap up this tutorial, we will run inference on a few examples and test our model. We will use the `pipeline` method from the `transformers` library to run inference on our model.
```python
from transformers import pipeline
# load model from huggingface.co/models using our repository id
classifier = pipeline("sentiment-analysis", model=repository_id, tokenizer=repository_id, device=0)
sample = "I have been waiting longer than expected for my bank card, could you provide information on when it will arrive?"
pred = classifier(sample)
print(pred)
# [{'label': 'card_arrival', 'score': 0.9903606176376343}]
```
## Conclusion
In this tutorial, we learned how to use PyTorch 2.0 to train a text classification model on the BANKING77 dataset. We saw that PyTorch 2.0 is a powerful tool to speed up your training times. In our example running on an NVIDIA A10G we managed to achieve 52.5% better performance. The Hugging Face Trainer allows you to easily integrate PyTorch 2.0 into your training pipeline by simply adding the `torch_compile` option to the `TrainingArguments`. We can further benefit from PyTorch 2.0 by using the new fused AdamW optimizer when bf16 is available.
Additionally, I want to mention that this speedup can be interpreted as a cost saving for the training or as faster iteration cycles and time to production. You should be able to see even better improvements by using A100 GPUs or by reducing the `Trainer` overhead, e.g., removing evaluation and logging.
PyTorch 2.0 is now officially launched and we are excited to see what the future brings. 🚀
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy T5 11B for inference for less than $500 | https://www.philschmid.de/deploy-t5-11b | 2022-10-25 | [
"HuggingFace",
"Transformers",
"Endpoints",
"bnb"
] | Learn how to deploy T5 11B on a single GPU using Hugging Face Inference Endpoints. | This blog will teach you how to deploy [T5 11B](https://huggingface.co/t5-11b) for inference using [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints). The T5 model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) paper and is one of the most used and known Transformer models today.
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on various tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: _`translate English to German: …`_, for summarization: _`summarize: ...`_
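To get a feel for this text-to-text interface before deploying the 11B checkpoint, here is a minimal sketch using the much smaller `t5-small` model, chosen purely for illustration and not part of the deployment below.

```python
# Minimal sketch of T5's prefix-based text-to-text interface.
# t5-small is used purely for illustration; the tutorial below deploys the sharded 11B weights instead.
from transformers import pipeline

t5 = pipeline("text2text-generation", model="t5-small")

print(t5("translate English to German: The weather is nice today.")[0]["generated_text"])
print(t5("summarize: Peter and Elizabeth took a taxi to attend the night party in the city.")[0]["generated_text"])
```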
![t5.png](/static/blog/deploy-t5-11b/t5.png)
Before we can get started, make sure you meet all of the following requirements:
1. An Organization/User with an active plan and _WRITE_ access to the model repository.
2. You can access the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
The Tutorial will cover how to:
1. [Prepare model repository, custom handler, and additional dependencies](#1-prepare-model-repository-custom-handler-and-additional-dependencies)
2. [Deploy the custom handler as an Inference Endpoint](#2-deploy-the-custom-handler-as-an-inference-endpoint)
3. [Send HTTP request using Python](#3-send-http-request-using-python)
## What is Hugging Face Inference Endpoints?
[🤗 Inference Endpoints](https://huggingface.co/inference-endpoints) offers a secure production solution to easily deploy Machine Learning models on dedicated and autoscaling infrastructure managed by Hugging Face.
A Hugging Face Inference Endpoint is built from a [Hugging Face Model Repository](https://huggingface.co/models). It supports all the [Transformers and Sentence-Transformers tasks](https://huggingface.co/docs/inference-endpoints/supported_tasks) and any arbitrary ML Framework through easy customization by adding a [custom inference handler.](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) This [custom inference handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) can be used to implement simple inference pipelines for ML Frameworks like Keras, Tensorflow, and scikit-learn or can be used to add custom business logic to your existing transformers pipeline.
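To make the idea of a handler for an arbitrary framework more concrete, here is a hedged sketch of what one could look like for a scikit-learn model. It follows the same `EndpointHandler` contract used for T5 later in this post; the `model.joblib` filename and the response format are assumptions for illustration only.

```python
# Hedged sketch of a custom handler for a scikit-learn model, following the
# EndpointHandler contract shown later in this post. The "model.joblib" filename
# and the response format are assumptions.
import os
from typing import Any, Dict, List

import joblib


class EndpointHandler:
    def __init__(self, path=""):
        # load a serialized scikit-learn pipeline/estimator from the model repository
        self.model = joblib.load(os.path.join(path, "model.joblib"))

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # "inputs" holds the feature rows sent by the client
        inputs = data.pop("inputs", data)
        predictions = self.model.predict(inputs)
        return [{"prediction": p.item() if hasattr(p, "item") else p} for p in predictions]
```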
## Tutorial: Deploy T5-11B on a single NVIDIA T4
In this tutorial, you will learn how to deploy [T5 11B](https://huggingface.co/t5-11b) for inference using [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how you can integrate it via an API into your products.
## 1. Prepare model repository, custom handler, and additional dependencies
[T5 11B](https://huggingface.co/t5-11b) is, with 11 billion parameters, one of the largest openly available Transformer models. The weights in float32 are 45.2GB and are normally too big to deploy on an NVIDIA T4 with 16GB of GPU memory.
To be able to fit T5-11b into a single GPU, we are going to use two techniques:
- **mixed precision and sharding:** Converting the weights to fp16 will reduce the memory footprint by 2x, and sharding will allow us to easily place each “shard” on a GPU without the need to load the model into CPU memory first.
- **LLM.int8():** introduces a new quantization technique for Int8 matrix multiplication, which cuts the memory needed for inference roughly in half while preserving prediction quality. To learn more, check out this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration) or the [paper](https://arxiv.org/abs/2208.07339).
We already prepared a repository with sharded fp16 weights of `T5-11B` on the Hugging Face Hub at: [philschmid/t5-11b-sharded](https://huggingface.co/philschmid/t5-11b-sharded). Those weights were created using the following snippet.
_Note: If you want to convert the weights yourself, e.g. to deploy [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl) you need at least 80GB of memory._
```python
import torch
from transformers import AutoModelWithLMHead
from huggingface_hub import HfApi
# load model as float16
model = AutoModelWithLMHead.from_pretrained("t5-11b", torch_dtype=torch.float16, low_cpu_mem_usage=True)
# shard model an push to hub
model.save_pretrained("sharded", max_shard_size="2000MB")
# push to hub
api = HfApi()
api.upload_folder(
folder_path="sharded",
repo_id="philschmid/t5-11b-sharded-fp16",
)
```
After we have our sharded fp16 model weights, we can prepare the additional dependencies we will need to use **LLM.int8()**. LLM.int8() has been natively integrated into `transformers` through [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
To [add custom dependencies](https://huggingface.co/docs/inference-endpoints/guides/custom_dependencies), we need to add a **`requirements.txt`** file to your model repository on the Hugging Face Hub with the Python dependencies you want to install.
```python
accelerate==0.13.2
bitsandbytes
```
The last step before creating our Inference Endpoint is to [create a custom Inference Handler](https://huggingface.co/docs/inference-endpoints/guides/custom_handler). If you want to learn how to create a custom Handler for Inference Endpoints, you can either check out the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler).
```python
from typing import Dict, List, Any
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
class EndpointHandler:
def __init__(self, path=""):
# load model and processor from path
self.model = AutoModelForSeq2SeqLM.from_pretrained(path, device_map="auto", load_in_8bit=True)
self.tokenizer = AutoTokenizer.from_pretrained(path)
def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
"""
Args:
data (:obj:):
includes the deserialized image file as PIL.Image
"""
# process input
inputs = data.pop("inputs", data)
parameters = data.pop("parameters", None)
# preprocess
input_ids = self.tokenizer(inputs, return_tensors="pt").input_ids
# pass inputs with all kwargs in data
if parameters is not None:
outputs = self.model.generate(input_ids, **parameters)
else:
outputs = self.model.generate(input_ids)
# postprocess the prediction
prediction = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
return [{"generated_text": prediction}]
```
## 2. Deploy the custom handler as an Inference Endpoint
UI: [https://ui.endpoints.huggingface.co/](https://ui.endpoints.huggingface.co/)
Since we prepared our model weights, dependencies, and custom handler, we can now deploy our model as an Inference Endpoint. We can deploy our Custom Handler the same way as a regular Inference Endpoint.
![model id](/static/blog/deploy-t5-11b/model.png)
Select the repository, the cloud, and the region. After that, we need to open the “Advanced Settings” to select `GPU • small • 1x NVIDIA Tesla T4`.
_Note: If you are trying to deploy the model on CPU the creation will fail_
![model id](/static/blog/deploy-t5-11b/instance.png)
The Inference Endpoint Service will check during the creation of your Endpoint if there is a `handler.py` available and will use it for serving requests no matter which “Task” you select.
The deployment can take 20-40 minutes because the image artifact build includes the large model weights (~30GB). After deploying our endpoint, we can test it using the inference widget.
![model id](/static/blog/deploy-t5-11b/inference.png)
## 3. Send HTTP request using Python
Hugging Face Inference Endpoints can be used with an HTTP client in any language. We will use Python and the `requests` library to send our requests (make sure you have it installed: `pip install requests`).
```python
import json
import requests as r
ENDPOINT_URL=""# url of your endpoint
HF_TOKEN=""
# payload samples
regular_payload = { "inputs": "translate English to German: The weather is nice today." }
parameter_payload = {
"inputs": "translate English to German: Hello my name is Philipp and I am a Technical Leader at Hugging Face",
"parameters" : {
"max_length": 40,
}
}
# HTTP headers for authorization
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
# send request
response = r.post(ENDPOINT_URL, headers=headers, json=parameter_payload)
generated_text = response.json()
print(generated_text)
```
## Conclusion
That's it, we successfully deployed our `T5-11b` to Hugging Face Inference Endpoints for less than $500.
To underline this again, we deployed one of the biggest available transformers in a managed, secure, scalable inference endpoint. This will allow Data scientists and Machine Learning Engineers to focus on R&D, improving the model rather than fiddling with MLOps topics.
Now, it's your turn! [Sign up](https://ui.endpoints.huggingface.co/new) and create your custom handler within a few minutes!
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Scalable, Secure Hugging Face Transformer Endpoints with Amazon SageMaker, AWS Lambda, and CDK | https://www.philschmid.de/huggingface-transformers-cdk-sagemaker-lambda | 2021-10-06 | [
"AWS",
"BERT",
"HuggingFace",
"Sagemaker"
] | Deploy Hugging Face Transformers to Amazon SageMaker and create an API for the Endpoint using AWS Lambda, API Gateway and AWS CDK. | Researchers, Data Scientists, Machine Learning Engineers are excellent at creating models to achieve new state-of-the-art performance on different tasks, but deploying those models in an accessible, scalable, and secure way is more of an art than science. Commonly, those skills are found in software engineering and DevOps. [Venturebeat](https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production/) reports that 87% of data science projects never make it to production, while [redapt](https://www.redapt.com/blog/why-90-of-machine-learning-models-never-make-it-to-production#:~:text=During%20a%20panel%20at%20last,actually%20make%20it%20into%20production.) claims it to be 90%.
We partnered up with AWS and the Amazon SageMaker team to reduce those numbers. Together we built 🤗 Transformers optimized Deep Learning Containers to accelerate and secure training and deployment of Transformers-based models. If you want to know more about the collaboration, take a look [here](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face).
In this blog, we are going to use the [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/?nc1=h_ls) to create our infrastructure and automatically deploy our model from the [Hugging Face Hub](https://huggingface.co/models) to the AWS Cloud. The AWS CDK uses the expressiveness of modern programming languages, like `Python`, to model and deploy your applications as code. In our example, we are going to build an application using the Hugging Face Inference DLC for model serving and Amazon [API Gateway](https://aws.amazon.com/de/api-gateway/) with [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) for building a secure, accessible API. The AWS Lambda will be used as a client proxy to interact with our SageMaker Endpoint.
![architecture](/static/blog/huggingface-transformers-cdk-sagemaker-lambda/architecture.png)
If you’re not familiar with Amazon SageMaker: _“Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.”_ [[REF]](https://aws.amazon.com/sagemaker/faqs/)
You find the complete code for it in this [Github repository](https://github.com/philschmid/cdk-samples/tree/master/aws-lambda-sagemaker-endpoint-huggingface).
---
## Tutorial
Before we get started, make sure you have the [AWS CDK installed](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_install) and [configured your AWS credentials](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_prerequisites).
**What are we going to do:**
- selecting a model from the [Hugging Face Hub](https://huggingface.co/models)
- bootstrap our CDK project
- Deploy the model using CDK
- Run inference and test the API
### 1. Selecting a model from the [Hugging Face Hub](https://huggingface.co/models)
For those of you who don't know what the Hugging Face Hub is, you should definitely take a look [here](https://huggingface.co/docs/hub/main). But the TL;DR; is that the Hugging Face Hub is an open community-driven collection of state-of-the-art models. At the time of writing the blog post, we have 17,501 available free models to use.
To select the model we want to use we navigate to [hf.co/models](http://hf.co/models) then pre-filter using the task on the left, e.g. `summarization`. For this blog post, I went with the [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), which was fine-tuned on CNN articles for summarization.
![Hugging Face Hub](/static/blog/huggingface-transformers-cdk-sagemaker-lambda/hub.png)
### 2. Bootstrap our CDK project
Deploying applications using the CDK may require additional resources for CDK to store for example assets. The process of provisioning these initial resources is called [bootstrapping](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html). So before being able to deploy our application, we need to make sure that we bootstrapped our project.
```bash
cdk bootstrap
```
### 3. Deploy the model using CDK
Now we are able to deploy our application with the whole infrastructure and deploy our previously selected Transformer `sshleifer/distilbart-cnn-12-6` to Amazon SageMaker. Our application uses the [CDK context](https://docs.aws.amazon.com/cdk/latest/guide/context.html) to accept dynamic parameters for the deployment. We can provide our model with the key `model` and our task with the key `task`. The application allows further configuration, like passing a different `instance_type` when deploying. You can find the whole list of arguments in the [repository](https://github.com/philschmid/cdk-samples/tree/master/aws-lambda-sagemaker-endpoint-huggingface#context).
In our case, we will provide `model=sshleifer/distilbart-cnn-12-6` and `task=summarization` with a GPU instance `instance_type=ml.g4dn.xlarge`.
```bash
cdk deploy \
-c model="sshleifer/distilbart-cnn-12-6" \
-c task="summarization" \
-c instance_type="ml.g4dn.xlarge"
```
After running the `cdk deploy` command, we will get an output of all resources that are going to be created. We then confirm our deployment, and the CDK will create all required resources, deploy our AWS Lambda function and our model to Amazon SageMaker. This takes around 3-5 minutes.
After the deployment, the console output should look similar to this.
```bash
✅ HuggingfaceSagemakerEndpoint
Outputs:
HuggingfaceSagemakerEndpoint.hfapigwEndpointE75D67B4 = https://r7rch77fhj.execute-api.us-east-1.amazonaws.com/prod/
Stack ARN:
arn:aws:cloudformation:us-east-1:558105141721:stack/HuggingfaceSagemakerEndpoint/6eab9e10-269b-11ec-86cc-0af6d09e2aab
```
### 4. Run inference and test the API
After the deployment is successfully completed, we can grab our Endpoint URL `HuggingfaceSagemakerEndpoint.hfapigwEndpointE75D67B4` from the CLI output and use any REST client to test it.
![insomnia request](/static/blog/huggingface-transformers-cdk-sagemaker-lambda/request.png)
The same request as curl to copy:
```bash
curl --request POST \
--url https://r7rch77fhj.execute-api.us-east-1.amazonaws.com/prod/ \
--header 'Content-Type: application/json' \
--data '{
"inputs": "Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021, is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016, was recognized for its work in democratizing NLP, the global market value for which is expected to hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team. Hugging Face is also knee-deep in a project called BigScience, an international, multi-company, multi-university research project with over 500 researchers, designed to better understand and improve results on large language models."
}
'
```
## Conclusion
With the help of the AWS CDK, we were able to deploy all required infrastructure for our API by defining it in a programming language we know and use. The Hugging Face Inference DLC allowed us to deploy a model from the [Hugging Face Hub](https://huggingface.co/) without writing a single line of inference code, and we are now able to securely use our publicly exposed API in any application, service, or frontend we want.
To optimize the solution, you can tweak the CDK template to your needs, e.g., add a VPC to the AWS Lambda and the SageMaker Endpoint to accelerate communication between those two.
---
You can find the code [here](https://github.com/philschmid/cdk-samples/tree/master/aws-lambda-sagemaker-endpoint-huggingface) and feel free to open a thread in the [forum](https://discuss.huggingface.co/).
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Deploy FLAN-UL2 20B on Amazon SageMaker | https://www.philschmid.de/deploy-flan-ul2-sagemaker | 2023-03-20 | [
"GenerativeAI",
"SageMaker",
"HuggingFace",
"Inference"
] | Learn how to deploy Google's FLAN-UL 20B on Amazon SageMaker for inference. | Welcome to this Amazon SageMaker guide on how to deploy the [FLAN-UL2 20B](https://huggingface.co/google/flan-ul2) on Amazon SageMaker for inference. We will deploy [google/flan-ul2](https://huggingface.co/google/flan-ul2) to Amazon SageMaker for real-time inference using Hugging Face Inference Deep Learning Container.
![flan-ul2-on-amazon-sagemaker](/static/blog/deploy-flan-ul2-sagemaker/sagemaker-endpoint.png)
What we are going to do
1. Create FLAN-UL2 20B inference script
2. Create SageMaker `model.tar.gz` artifact
3. Deploy the model to Amazon SageMaker
4. Run inference using the deployed model
## Quick intro: FLAN-UL2, a bigger FLAN-T5
Flan-UL2 is an encoder-decoder (seq2seq) model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year. It was fine-tuned using the "Flan" prompt tuning and dataset collection. FLAN-UL2 was trained as part of the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper. Noticeable differences to FLAN-T5 XXL are:
- FLAN-UL2 has a context window of 2048 compared to 512 for FLAN-T5 XXL
- +~3% better performance than FLAN-T5 XXL on [benchmarks](https://huggingface.co/google/flan-ul2#performance-improvment)
![flan-ul2](/static/blog/deploy-flan-ul2-sagemaker/flan.webp)
- Paper: https://arxiv.org/abs/2210.11416
- Official repo: https://github.com/google-research/t5x
---
Before we can get started we have to install the missing dependencies to be able to create our `model.tar.gz` artifact and create our Amazon SageMaker endpoint.
We also have to make sure we have the permission to create our SageMaker Endpoint.
```python
!pip install "sagemaker>=2.140.0" boto3 "huggingface_hub==0.13.0" "hf-transfer" --upgrade
```
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## Create FLAN-UL2 20B inference script
Amazon SageMaker allows us to customize the inference script by providing an `inference.py` file. The `inference.py` file is the entry point to our model. It is responsible for loading the model and handling the inference request. If you are used to deploying Hugging Face Transformers, that might be new to you. Usually, we just provide the `HF_MODEL_ID` and `HF_TASK` and the Hugging Face DLC takes care of the rest. For `FLAN-UL2` that's not yet possible. We have to provide the `inference.py` file and implement the `model_fn` and `predict_fn` functions to efficiently load the 20B model.
If you want to learn more about creating a custom inference script you can check out [Creating document embeddings with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/custom-inference-huggingface-sagemaker)
In addition to the `inference.py` file we also have to provide a `requirements.txt` file. The `requirements.txt` file is used to install the dependencies for our `inference.py` file.
The first step is to create a `code/` directory.
```python
!mkdir code
```
Next, we create a `requirements.txt` file and add `accelerate` to it. The `accelerate` library is used to efficiently load the model on multiple GPUs.
```python
%%writefile code/requirements.txt
accelerate==0.18.0
transformers==4.27.2
```
The last step for our inference handler is to create the `inference.py` file. The `inference.py` file is responsible for loading the model and handling the inference request. The `model_fn` function is called when the model is loaded. The `predict_fn` function is called when we want to do inference.
We are using the `AutoModelForSeq2SeqLM` class from transformers to load the model from the local directory (`model_dir`) in the `model_fn`. In the `predict_fn` function, we are using the `generate` function from transformers to generate the text for a given input prompt.
```python
%%writefile code/inference.py
from typing import Dict, List, Any
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
def model_fn(model_dir):
# load model and processor from model_dir
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir,
device_map="auto",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
return model, tokenizer
def predict_fn(data, model_and_tokenizer):
# unpack model and tokenizer
model, tokenizer = model_and_tokenizer
# process input
inputs = data.pop("inputs", data)
parameters = data.pop("parameters", None)
# preprocess
input_ids = tokenizer(inputs, return_tensors="pt").input_ids
# pass inputs with all kwargs in data
if parameters is not None:
outputs = model.generate(input_ids, **parameters)
else:
outputs = model.generate(input_ids)
# postprocess the prediction
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True)
return [{"generated_text": prediction}]
```
## Create SageMaker `model.tar.gz` artifact
To use our `inference.py` we need to bundle it together with our model weights into a `model.tar.gz`. The archive includes all our model artifacts to run inference. The `inference.py` script will be placed into a `code/` folder. We will use the `huggingface_hub` SDK to easily download [google/flan-ul2](https://huggingface.co/google/flan-ul2) from [Hugging Face](https://hf.co/models) and then upload it to Amazon S3 with the `sagemaker` SDK.
Make sure the environment has enough disk space to store the model; ~35GB should be enough.
```python
from distutils.dir_util import copy_tree
from pathlib import Path
import os
# set HF_HUB_ENABLE_HF_TRANSFER env var to enable hf-transfer for faster downloads
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
HF_MODEL_ID="google/flan-ul2"
# create model dir
model_tar_dir = Path(HF_MODEL_ID.split("/")[-1])
model_tar_dir.mkdir(exist_ok=True)
# Download model from Hugging Face into model_dir
snapshot_download(HF_MODEL_ID, local_dir=str(model_tar_dir), local_dir_use_symlinks=False)
# copy code/ to model dir
copy_tree("code/", str(model_tar_dir.joinpath("code")))
```
Before we can upload the model to Amazon S3 we have to create a `model.tar.gz` archive. Important is that the archive should directly contain all files and not a folder with the files. For example, your file should look like this:
```
model.tar.gz/
|- config.json
|- pytorch_model-00001-of-00012.bin
|- tokenizer.json
|- ...
```
```python
parent_dir=os.getcwd()
# change to model dir
os.chdir(str(model_tar_dir))
# use pigz for faster and parallel compression
!tar -cf model.tar.gz --use-compress-program=pigz *
# change back to parent dir
os.chdir(parent_dir)
```
After we created the `model.tar.gz` archive we can upload it to Amazon S3. We will use the `sagemaker` SDK to upload the model to our sagemaker session bucket.
```python
from sagemaker.s3 import S3Uploader
# upload model.tar.gz to s3
s3_model_uri = S3Uploader.upload(local_path=str(model_tar_dir.joinpath("model.tar.gz")), desired_s3_uri=f"s3://{sess.default_bucket()}/flan-ul2")
print(f"model uploaded to: {s3_model_uri}")
```
## Deploy the model to Amazon SageMaker
After we have uploaded our model archive, we can deploy our model to Amazon SageMaker. We will use the `HuggingFaceModel` class to create our real-time inference endpoint.
We are going to deploy the model to a `g5.12xlarge` instance. The `g5.12xlarge` instance is a GPU instance with 4x NVIDIA A10G GPUs. If you are interested in how you could add autoscaling to your endpoint, you can check out [Going Production: Auto-scaling Hugging Face Transformers with Amazon SageMaker](https://www.philschmid.de/auto-scaling-sagemaker-huggingface).
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.26", # transformers version used
pytorch_version="1.13", # pytorch version used
py_version='py39', # python version used
model_server_workers=1
)
# deploy the endpoint endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g5.12xlarge",
# container_startup_health_check_timeout=600, # increase timeout for large models
# model_data_download_timeout=600, # increase timeout for large models
)
```
## Run inference using the deployed model
The `.deploy()` returns an `HuggingFacePredictor` object which can be used to request inference using the `.predict()` method. Our endpoint expects a `json` with at least `inputs` key.
When using generative models, most of the time you want to configure or customize your prediction to fit your needs, for example by using beam search, configuring the max or min length of the generated sequence, or adjusting the temperature to reduce repetition.
The Transformers library provides different strategies and kwargs to do this, the Hugging Face Inference toolkit offers the same functionality using the parameters attribute of your request payload. Below you can find examples on how to generate text without parameters, with beam search, and using custom configurations. If you want to learn about different decoding strategies check out this [blog post](https://huggingface.co/blog/how-to-generate).
```python
payload = """Summarize the following text:
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
parameters = {
"do_sample": True,
"max_new_tokens": 50,
"top_p": 0.95,
}
# Run prediction
predictor.predict({
"inputs": payload,
"parameters" :parameters
})
# [{'generated_text': 'Peter stayed with Elizabeth at the hospital for 3 days.'}]
```
Let's try another example! This time we focus on question answering with a step-by-step approach, including some simple math.
```python
payload = """Answer the following question step by step:
Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
"""
parameters = {
"early_stopping": True,
"length_penalty": 2.0,
"max_new_tokens": 50,
"temperature": 0,
}
# Run prediction
predictor.predict({
"inputs": payload,
"parameters" :parameters
})
# [{'generated_text': 'He buys 2 cans of tennis balls, so he has 2 * 3 = 6 tennis balls. He has 5 + 6 = 11 tennis balls now.'}]
```
### Delete model and endpoint
To clean up, we can delete the model and endpoint.
```python
predictor.delete_model()
predictor.delete_endpoint()
```
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker | https://www.philschmid.de/sagemaker-serverless-huggingface-distilbert | 2022-04-21 | [
"HuggingFace",
"AWS",
"BERT",
"Serverless"
] | Learn how to deploy a Transformer model like BERT to Amazon SageMaker Serverless using the Python SageMaker SDK. | [Notebook: serverless_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/19_serverless_inference/sagemaker-notebook.ipynb)
Welcome to this getting started guide, you learn how to use the Hugging Face Inference DLCs and Amazon SageMaker Python SDK to create a [Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html) endpoint.
Amazon SageMaker Serverless Inference is a new capability in SageMaker that enables you to deploy and scale ML models in a Serverless fashion. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic similar to AWS Lambda.
Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. With a pay-per-use model, Serverless Inference is a cost-effective option if you have an infrequent or unpredictable traffic pattern.
You will learn how to:
- [1. Setup development environment and permissions](#1-setup-development-environment-and-permissions)
- [2. Create and Deploy a Serverless Hugging Face Transformers](#2-create-and-deploy-a-serverless-hugging-face-transformers)
- [3. Send requests to Serverless Inference Endpoint](#3-send-requests-to-serverless-inference-endpoint)
Let's get started! 🚀
### How it works
The following diagram shows the workflow of Serverless Inference.
![architecture](/static/blog/sagemaker-serverless-huggingface-distilbert/serverless.png)
When you create a serverless endpoint, SageMaker provisions and manages the compute resources for you. Then, you can make inference requests to the endpoint and receive model predictions in response. SageMaker scales the compute resources up and down as needed to handle your request traffic, and you only pay for what you use.
### Limitations
- Memory size: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB
- Concurrent invocations: 50 per region
- Cold starts: ms to seconds; can be monitored with the `ModelSetupTime` CloudWatch metric (see the sketch below)
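If you want to keep an eye on cold starts, the metric can be pulled with `boto3`. The sketch below is a hedged example: the endpoint name is a placeholder, and the `AWS/SageMaker` namespace with `EndpointName`/`VariantName` dimensions follows the usual SageMaker endpoint metrics, so double-check it against your CloudWatch console.

```python
# Hedged sketch: query the ModelSetupTime metric for a serverless endpoint with boto3.
# The endpoint name is a placeholder; adapt namespace/dimensions to your deployment if needed.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/SageMaker",
    MetricName="ModelSetupTime",
    Dimensions=[
        {"Name": "EndpointName", "Value": "my-serverless-endpoint"},
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)
print(response["Datapoints"])
```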
_NOTE: You can run this demo in Sagemaker Studio, your local machine, or Sagemaker Notebook Instances_
## 1. Setup development environment and permissions
```python
!pip install sagemaker --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.86.0"
```
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## 2. Create and Deploy a Serverless Hugging Face Transformers
We use the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model to run our serverless endpoint. This model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2. This model reaches an accuracy of 91.3 on the dev set (for comparison, the `bert-base-uncased` version of BERT reaches an accuracy of 92.7).
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serverless import ServerlessInferenceConfig
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'distilbert-base-uncased-finetuned-sst-2-english',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py38', # python version used
)
# Specify MemorySizeInMB and MaxConcurrency in the serverless config object
serverless_config = ServerlessInferenceConfig(
memory_size_in_mb=4096, max_concurrency=10,
)
# deploy the endpoint endpoint
predictor = huggingface_model.deploy(
serverless_inference_config=serverless_config
)
```
## 3. Send requests to Serverless Inference Endpoint
The `.deploy()` returns an `HuggingFacePredictor` object which can be used to request inference. This `HuggingFacePredictor` makes it easy to send requests to your endpoint and get the results back.
_The first request might have some coldstart (2-5s)._
```python
data = {
"inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
res = predictor.predict(data=data)
print(res)
```
### Clean up
```python
predictor.delete_model()
predictor.delete_endpoint()
```
## 4. Conclusion
With the help of the Python SageMaker SDK, we were able to deploy an Amazon SageMaker Serverless Inference Endpoint for Hugging Face Transformers with 1 command (`deploy`).
This will help any large or small company get started quickly and cost-effectively with Hugging Face Transformers on AWS. The beauty of serverless computing is that your Data Science or Machine Learning team is not spending thousands of dollars while implementing a proof of concept or at the start of a new product.
If, after a successful PoC, Serverless Inference is not performing well or becomes too expensive, you can easily deploy your model to a real-time endpoint with GPUs just by changing one line of code, as shown in the sketch below.
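A minimal sketch of that change, reusing the model configuration from above; the instance type is an assumption you should adapt to your model and budget.

```python
# Hedged sketch: the same HuggingFaceModel, deployed to a real-time GPU endpoint
# instead of a serverless one. The instance type is an assumption.
from sagemaker.huggingface.model import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    env={"HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english", "HF_TASK": "text-classification"},
    role=role,                       # iam role with permissions to create an Endpoint
    transformers_version="4.12",
    pytorch_version="1.9",
    py_version="py38",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,        # the changed lines: instance config instead of
    instance_type="ml.g4dn.xlarge",  # serverless_inference_config
)
```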
You should definitely give SageMaker Serverless Inference a try!
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Efficient Large Language Model training with LoRA and Hugging Face | https://www.philschmid.de/fine-tune-flan-t5-peft | 2023-03-23 | [
"GenerativeAI",
"LoRA",
"HuggingFace",
"Training"
] | Learn how to fine-tune Google's FLAN-T5 XXL on a Single GPU using LoRA And Hugging Face Transformers. | In this blog, we are going to show you how to apply [Low-Rank Adaptation of Large Language Models (LoRA)](https://arxiv.org/abs/2106.09685) to fine-tune FLAN-T5 XXL (11 billion parameters) on a single GPU. We are going to leverage Hugging Face [Transformers](https://huggingface.co/docs/transformers/index), [Accelerate](https://huggingface.co/docs/accelerate/index), and [PEFT](https://github.com/huggingface/peft).
You will learn how to:
1. [Setup Development Environment](#1-setup-development-environment)
2. [Load and prepare the dataset](#2-load-and-prepare-the-dataset)
3. [Fine-Tune T5 with LoRA and bnb int-8](#3-fine-tune-t5-with-lora-and-bnb-int-8)
4. [Evaluate & run Inference with LoRA FLAN-T5](#4-evaluate--run-inference-with-lora-flan-t5)
### Quick intro: PEFT or Parameter Efficient Fine-tuning
[PEFT](https://github.com/huggingface/peft), or Parameter Efficient Fine-tuning, is a new open-source library from Hugging Face to enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. PEFT currently includes techniques for:
- LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/pdf/2106.09685.pdf)
- Prefix Tuning: [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)
- P-Tuning: [GPT Understands, Too](https://arxiv.org/pdf/2103.10385.pdf)
- Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/pdf/2104.08691.pdf)
_Note: This tutorial was created and run on a g5.2xlarge AWS EC2 Instance, including 1 NVIDIA A10G._
## 1. Setup Development Environment
In our example, we use the [PyTorch Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-pytorch.html) with already set up CUDA drivers and PyTorch installed. We still have to install the Hugging Face Libraries, including transformers and datasets. Running the following cell will install all the required packages.
```python
# install Hugging Face Libraries
!pip install git+https://github.com/huggingface/peft.git
!pip install "transformers==4.27.2" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" "bitsandbytes==0.37.1" loralib --upgrade --quiet
# install additional dependencies needed for training
!pip install rouge-score tensorboard py7zr
```
## 2. Load and prepare the dataset
We will use the [samsum](https://huggingface.co/datasets/samsum) dataset, a collection of about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English.
```python
{
"id": "13818513",
"summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
"dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"
}
```
To load the `samsum` dataset, we use the **`load_dataset()`** method from the 🤗 Datasets library.
```python
from datasets import load_dataset
# Load dataset from the hub
dataset = load_dataset("samsum")
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
# Train dataset size: 14732
# Test dataset size: 819
```
To train our model, we need to convert our inputs (text) to token IDs. This is done by a 🤗 Transformers Tokenizer. If you are not sure what this means, check out **[chapter 6](https://huggingface.co/course/chapter6/1?fw=tf)** of the Hugging Face Course.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id="google/flan-t5-xxl"
# Load tokenizer of FLAN-t5-XL
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Before we can start training, we need to preprocess our data. Abstractive summarization is a text-generation task. Our model will take a text as input and generate a summary as output. We want to understand how long our inputs and outputs are so we can batch our data efficiently.
```python
from datasets import concatenate_datasets
import numpy as np
# The maximum total input sequence length after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded.
tokenized_inputs = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["dialogue"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
input_lenghts = [len(x) for x in tokenized_inputs["input_ids"]]
# take 85 percentile of max length for better utilization
max_source_length = int(np.percentile(input_lenghts, 85))
print(f"Max source length: {max_source_length}")
# The maximum total sequence length for target text after tokenization.
# Sequences longer than this will be truncated, sequences shorter will be padded."
tokenized_targets = concatenate_datasets([dataset["train"], dataset["test"]]).map(lambda x: tokenizer(x["summary"], truncation=True), batched=True, remove_columns=["dialogue", "summary"])
target_lenghts = [len(x) for x in tokenized_targets["input_ids"]]
# take 90 percentile of max length for better utilization
max_target_length = int(np.percentile(target_lenghts, 90))
print(f"Max target length: {max_target_length}")
```
We preprocess our dataset before training and save it to disk. You could run this step on your local machine or a CPU and upload it to the [Hugging Face Hub](https://huggingface.co/docs/hub/datasets-overview).
```python
def preprocess_function(sample,padding="max_length"):
# add prefix to the input for t5
inputs = ["summarize: " + item for item in sample["dialogue"]]
# tokenize inputs
model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
# Tokenize targets with the `text_target` keyword argument
labels = tokenizer(text_target=sample["summary"], max_length=max_target_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length":
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=["dialogue", "summary", "id"])
print(f"Keys of tokenized dataset: {list(tokenized_dataset['train'].features)}")
# save datasets to disk for later easy loading
tokenized_dataset["train"].save_to_disk("data/train")
tokenized_dataset["test"].save_to_disk("data/eval")
```
## 3. Fine-Tune T5 with LoRA and bnb int-8
In addition to the LoRA technique, we will use [bitsandbytes LLM.int8()](https://huggingface.co/blog/hf-bitsandbytes-integration) to quantize our frozen LLM to int8. This allows us to reduce the needed memory for FLAN-T5 XXL by ~4x.
The first step of our training is to load the model. We are going to use [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16), which is a sharded version of [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl). The sharding will help us to not run off of memory when loading the model.
```python
from transformers import AutoModelForSeq2SeqLM
# huggingface hub model id
model_id = "philschmid/flan-t5-xxl-sharded-fp16"
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")
```
Now, we can prepare our model for the LoRA int-8 training using `peft`.
```python
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType
# Define LoRA Config
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["q", "v"],
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM
)
# prepare int-8 model for training
model = prepare_model_for_int8_training(model)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# trainable params: 18874368 || all params: 11154206720 || trainable%: 0.16921300163961817
```
As you can see, here we are only training 0.16% of the parameters of the model! This huge memory gain will enable us to fine-tune the model without memory issues.
Next is to create a `DataCollator` that will take care of padding our inputs and labels. We will use the `DataCollatorForSeq2Seq` from the 🤗 Transformers library.
```python
from transformers import DataCollatorForSeq2Seq
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = -100
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
```
The last step is to define the hyperparameters (`TrainingArguments`) we want to use for our training.
```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
output_dir="lora-flan-t5-xxl"
# Define training args
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
auto_find_batch_size=True,
learning_rate=1e-3, # higher learning rate
num_train_epochs=5,
logging_dir=f"{output_dir}/logs",
logging_strategy="steps",
logging_steps=500,
save_strategy="no",
report_to="tensorboard",
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
```
Let's now train our model and run the cells below. Note that for T5, some layers are kept in `float32` for stability purposes.
```python
# train model
trainer.train()
```
The training took ~10:36:00 and cost `~13.22$` for 10h of training. For comparison a [full fine-tuning on FLAN-T5-XXL](https://www.philschmid.de/fine-tune-flan-t5-deepspeed#3-results--experiments) with the same duration (10h) requires 8x A100 40GBs and costs ~322$.
We can save our model to use it for inference and evaluate it. We will save it to disk for now, but you could also upload it to the [Hugging Face Hub](https://huggingface.co/docs/hub/main) using the `model.push_to_hub` method.
```python
# Save our LoRA model & tokenizer results
peft_model_id="results"
trainer.model.save_pretrained(peft_model_id)
tokenizer.save_pretrained(peft_model_id)
# if you want to save the base model to call
# trainer.model.base_model.save_pretrained(peft_model_id)
```
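If you prefer the Hub over the local disk, a hedged sketch could look like the following; the repository id is a placeholder you would replace with your own:

```python
# Hypothetical example: push the LoRA adapter and tokenizer to the Hugging Face Hub.
# "your-username/lora-flan-t5-xxl" is a placeholder repository id.
trainer.model.push_to_hub("your-username/lora-flan-t5-xxl")
tokenizer.push_to_hub("your-username/lora-flan-t5-xxl")
```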
Our LoRA checkpoint is only 84MB in size and includes all of the knowledge learned for samsum.
## 4. Evaluate & run Inference with LoRA FLAN-T5
After training is done we want to evaluate and test the model. The most commonly used metric for evaluating summarization tasks is the [ROUGE score](<https://en.wikipedia.org/wiki/ROUGE_(metric)>), short for Recall-Oriented Understudy for Gisting Evaluation. This metric does not behave like standard accuracy: it compares a generated summary against a set of reference summaries.
We are going to use the `evaluate` library to compute the ROUGE score. We can run inference using `PEFT` and `transformers`. For our FLAN-T5 XXL model, we need at least 18GB of GPU memory.
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "results"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
model.eval()
print("Peft model loaded")
```
Let’s load the dataset again with a random sample to try the summarization.
```python
from datasets import load_dataset
from random import randrange
# Load dataset from the hub and get a sample
dataset = load_dataset("samsum")
sample = dataset['test'][randrange(len(dataset["test"]))]
input_ids = tokenizer(sample["dialogue"], return_tensors="pt", truncation=True).input_ids.cuda()
# with torch.inference_mode():
outputs = model.generate(input_ids=input_ids, max_new_tokens=10, do_sample=True, top_p=0.9)
print(f"input sentence: {sample['dialogue']}\n{'---'* 20}")
print(f"summary:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]}")
```
Nice! Our model works! Now, let's take a closer look and evaluate it against the `test` split of the processed `samsum` dataset. For that we need a few utilities to generate the summaries and group them together, and we again use the ROUGE score described above.
```python
import evaluate
import numpy as np
from datasets import load_from_disk
from tqdm import tqdm
# Metric
metric = evaluate.load("rouge")
def evaluate_peft_model(sample,max_target_length=50):
# generate summary
outputs = model.generate(input_ids=sample["input_ids"].unsqueeze(0).cuda(), do_sample=True, top_p=0.9, max_new_tokens=max_target_length)
prediction = tokenizer.decode(outputs[0].detach().cpu().numpy(), skip_special_tokens=True)
# decode eval sample
# Replace -100 in the labels as we can't decode them.
labels = np.where(sample['labels'] != -100, sample['labels'], tokenizer.pad_token_id)
labels = tokenizer.decode(labels, skip_special_tokens=True)
# Some simple post-processing
return prediction, labels
# load test dataset from disk
test_dataset = load_from_disk("data/eval/").with_format("torch")
# run predictions
# this can take ~45 minutes
predictions, references = [] , []
for sample in tqdm(test_dataset):
p,l = evaluate_peft_model(sample)
predictions.append(p)
references.append(l)
# compute metric
rouge = metric.compute(predictions=predictions, references=references, use_stemmer=True)
# print results
print(f"rouge1: {rouge['rouge1']* 100:2f}%")
print(f"rouge2: {rouge['rouge2']* 100:2f}%")
print(f"rougeL: {rouge['rougeL']* 100:2f}%")
print(f"rougeLsum: {rouge['rougeLsum']* 100:2f}%")
# rouge1: 50.386161%
# rouge2: 24.842412%
# rougeL: 41.370130%
# rougeLsum: 41.394230%
```
Our PEFT fine-tuned FLAN-T5-XXL achieved a rouge1 score of `50.38%` on the test dataset. For comparison, a [full fine-tuning of flan-t5-base achieved a rouge1 score of 47.23](https://www.philschmid.de/fine-tune-flan-t5). That is a `3%` improvement.
It is incredible to see that our LoRA checkpoint is only 84MB in size and that the model achieves better performance than a smaller, fully fine-tuned model.
---
Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS | https://www.philschmid.de/getting-started-habana-gaudi | 2022-06-14 | [
"BERT",
"Habana",
"HuggingFace",
"Optimum"
] | Learn how to setup a Deep Learning Environment for Hugging Face Transformers with Habana Gaudi on AWS using the DL1 instance type. | This blog contains instructions for how to setup a Deep Learning Environment for Habana Gaudi on AWS using the DL1 instance type and Hugging Face libraries like [transformers](https://huggingface.co/docs/transformers/index), [optimum](https://huggingface.co/docs/optimum/index), [datasets](https://huggingface.co/docs/datasets/index). This guide will show you how to set up the development environment on the AWS cloud and get started with Hugging Face Libraries.
This guide covers:
1. [Requirements](#1-requirements)
2. [Create an AWS EC2 instance](#2-create-an-aws-ec2-instance)
3. [Connect to the instance via ssh](#3-connect-to-the-instance-via-ssh)
4. [Use Jupyter Notebook/Lab via ssh](#4-use-jupyter-notebook-lab-via-ssh)
5. [Fine-tune Hugging Face Transformers with Optimum](#5-fine-tune-hugging-face-transformers-with-optimum)
6. [Clean up](#6-clean-up)
Or you can jump to the [Conclusion](#conclusion).
Let's get started! 🚀
## 1. Requirements
Before we can start make sure you have met the following requirements
- AWS Account with quota for [DL1 instance type](https://aws.amazon.com/ec2/instance-types/dl1/)
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed
- AWS IAM user [configured in CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) with permission to create and manage ec2 instances
## 2. Create an AWS EC2 instance
To be able to launch an EC2 instance we need to create a `key-pair` and `security-group`, which will be used to access the instance via ssh.
Configure AWS PROFILE and AWS REGION which will be used for the instance
```bash
export AWS_PROFILE=<your-aws-profile>
export AWS_DEFAULT_REGION=<your-aws-region>
```
We create a key pair using the `aws` cli and save the key into a local `.pem` file.
```bash
KEY_NAME=habana
aws ec2 create-key-pair --key-name ${KEY_NAME} --query 'KeyMaterial' --output text > ${KEY_NAME}.pem
chmod 400 ${KEY_NAME}.pem
```
Next we create a security group, which allows ssh access to the instance. We are going to use the default VPC, but this could be adjusted by changing the `vpc-id` in the `create-security-group` command.
```bash
SG_NAME=habana
DEFAULT_VPC_ID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId' --output text)
echo "Default VPC ID: ${DEFAULT_VPC_ID}"
SECURITY_GROUP_ID=$(aws ec2 create-security-group --group-name ${SG_NAME}-sg --description "SG for Habana Deep Learning" --vpc-id ${DEFAULT_VPC_ID} --output text)
echo "Security Group ID: ${SECURITY_GROUP_ID}"
echo $(aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol tcp --port 22 --cidr 0.0.0.0/0 --output text)
```
We completed all necessary steps to start our DL1 Habana Gaudi instance in a secure environment. We are going to use the community AMI created and managed by Habana, which is identical to the marketplace image. The community AMI doesn't require an opt-in first. If you want to use the official marketplace image, you have to subscribe in the UI first; then you can access it with the following command: `AMI_ID=$(aws ec2 describe-images --filters "Name=name,Values=* Habana Deep Learning Base AMI (Ubuntu 20.*" --query 'Images[0].ImageId' --output text)`.
```bash
AMI_ID=$(aws ec2 describe-images --filters "Name=name,Values=*habanalabs-base-ubuntu20.04*" --query 'Images[0].ImageId' --output text)
echo "AMI ID: ${AMI_ID}"
INSTANCE_TYPE=dl1.24xlarge
INSTANCE_NAME=habana
aws ec2 run-instances \
--image-id ${AMI_ID} \
--key-name ${KEY_NAME} \
--count 1 \
--instance-type ${INSTANCE_TYPE} \
--security-group-ids ${SECURITY_GROUP_ID} \
--block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=150}' \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${INSTANCE_NAME}-demo}]"
```
_P.S. you can also use the `start_instance.sh` script from [Github repository](https://github.com/philschmid/deep-learning-habana-huggingface) which does all of the steps above._
## 3. Connect to the instance via ssh
After around 45-60 seconds we can connect to the Habana Gaudi instance via ssh. We will use the following command to get the public IP and then ssh into the machine using the earlier created key pair.
```bash
INSTANCE_NAME=habana
PUBLIC_DOMAIN=$(aws ec2 describe-instances --profile sandbox \
--filters Name=tag-value,Values=${INSTANCE_NAME}-demo \
--query 'Reservations[*].Instances[*].PublicDnsName' \
--output text)
ssh -i ${KEY_NAME}.pem ubuntu@${PUBLIC_DOMAIN//[$'\t\r\n ']}
```
Let's see if we can access the Gaudi devices. Habana provides a CLI tool similar to `nvidia-smi`, called `hl-smi`.
You can find more documentation [here](https://docs.habana.ai/en/latest/Management_and_Monitoring/System_Management_Tools_Guide/System_Management_Tools.html).
```bash
hl-smi
```
You should see a similar output to the one below.
![hl-smi](/static/blog/getting-started-habana-gaudi/hl-smi.png)
We can also test if we can allocate the `hpu` device in `PyTorch`. Therefore we will pull the latest docker image with torch installed and run `python3` with the code snippet below. A more detailed guide can be found in [Porting a Simple PyTorch Model to Gaudi](https://docs.habana.ai/en/latest/PyTorch/Migration_Guide/Porting_Simple_PyTorch_Model_to_Gaudi.html).
Start a docker container with torch installed:
```bash
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:latest
```
Start a Python session with `python3` and execute the code below:
```python
# the HPU integration is exposed through the habana_frameworks package shipped with the container
import habana_frameworks.torch.hpu as torch_hpu
print(f"device available:{torch_hpu.is_available()}")
print(f"device_count:{torch_hpu.device_count()}")
```
## 4. Use Jupyter Notebook/Lab via ssh
Connecting via ssh works as expected, but who likes to develop inside a terminal? In this section we will learn how to install `Jupyter` and `Jupyter Notebook/Lab` and how to connect to it, so we have a better machine learning environment than just a terminal. For this to work we need to add port forwarding to the ssh connection so we can open the notebook in the browser.
First, we need to create a new `ssh` connection with port forwarding for port `8888`:
```bash
INSTANCE_NAME=habana
PUBLIC_DOMAIN=$(aws ec2 describe-instances --profile sandbox \
--filters Name=tag-value,Values=${INSTANCE_NAME}-demo \
--query 'Reservations[*].Instances[*].PublicDnsName' \
--output text)
ssh -L 8888:localhost:8888 -i ${KEY_NAME}.pem ubuntu@${PUBLIC_DOMAIN//[$'\t\r\n ']}
```
After we are connected, we again start our container, this time with a mounted volume so we don't lose our data later.
```bash
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host -v /home/ubuntu:/home/ubuntu -w /home/ubuntu vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:latest
```
The next and last step is to install and run `jupyter`:
```bash
pip install jupyter
jupyter notebook --allow-root
```
You should see a familiar jupyter output with a url to the notebook.
```bash
http://localhost:8888/?token=c7a150a559c3e9d6d48d285f7023a341aaf94dac994d787d
```
We can click on it and a jupyter environment opens in our local browser.
![jupyter](/static/blog/getting-started-habana-gaudi/jupyter.png)
We can now run similar tests as via the terminal. Therefore create a new notebook and run the following code:
```python
# the HPU integration is exposed through the habana_frameworks package shipped with the container
import habana_frameworks.torch.hpu as torch_hpu
print(f"device available:{torch_hpu.is_available()}")
print(f"device_count:{torch_hpu.device_count()}")
```
![jupyter_devices](/static/blog/getting-started-habana-gaudi/jupyter_devices.png)
## 5. Fine-tune Hugging Face Transformers with Optimum
Our development environment is set up. Now let's install and test Hugging Face Transformers on Habana Gaudi. To do this we simply install the [transformers](https://github.com/huggingface/transformers) and [optimum-habana](https://github.com/huggingface/optimum-habana) packages via `pip`.
```bash
pip install transformers datasets
pip install git+https://github.com/huggingface/optimum-habana.git # workaround until release of optimum-habana
```
After we have installed the packages we can start fine-tuning a transformers model with the `optimum` package. Below you can find a simplified example that fine-tunes the `bert-base-uncased` model on the `emotion` dataset for a `text-classification` task. This is a very simplified example, which only uses 1 Gaudi processor instead of 8, and the `TrainingArguments` are not optimized.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_dataset
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
# load pre-trained model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# load dataset
dataset = load_dataset("emotion")
# preprocess dataset
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# define Gaudi Training Arguments
training_args = GaudiTrainingArguments(
output_dir=".",
use_habana=True,
use_lazy_mode=True,
gaudi_config_name="Habana/bert-base-uncased",
per_device_train_batch_size=48
)
# Initialize our Trainer
trainer = GaudiTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
tokenizer=tokenizer,
)
# Run training
trainer.train()
```
![fine-tuning](/static/blog/getting-started-habana-gaudi/fine-tuning.png)
_We will create a more detailed guide on how to leverage the habana instances in the near future._
## 6. Clean up
To make sure we stop/delete everything we created you can follow the steps below.
1. Terminate the ec2 instance
```Bash
INSTANCE_NAME=habana
aws ec2 terminate-instances --instance-ids $(aws ec2 describe-instances --filters "Name=tag:Name,Values=${INSTANCE_NAME}-demo" --query 'Reservations[*].Instances[*].InstanceId' --output text) \
2>&1 > /dev/null
```
2. Delete the security group. _It can be deleted once the instance is terminated._
```bash
SG_NAME=habana
aws ec2 delete-security-group --group-name ${SG_NAME}-sg
```
3. Delete the key pair. _It can be deleted once the instance is terminated._
```bash
KEY_NAME=habana
aws ec2 delete-key-pair --key-name ${KEY_NAME}
rm ${KEY_NAME}.pem
```
## 7. Conclusion
That's it! Now you can start using Habana Gaudi for running your deep learning workloads with Hugging Face Transformers. We walked through how to set up a development environment for Habana Gaudi via the terminal or with a Jupyter environment. In addition to this, you can use `vscode` via [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) to connect to your instance and run your code.
The next step is to create an advanced guide for Hugging Face Transformers with Habana Gaudi to learn on how to use distributed training, configure optimized `TrainingArguments` and fine-tune & pre-train transformer models. Stay tuned!🚀
Until then, you can check out more examples in the [optimum-habana](https://github.com/huggingface/optimum-habana/tree/main/examples) repository.
---
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Workshop: Enterprise-Scale NLP with Hugging Face & Amazon SageMaker | https://www.philschmid.de/hugginface-sagemaker-workshop | 2021-12-29 | [
"HuggingFace",
"AWS",
"SageMaker"
] | In October and November, we held a workshop series on “Enterprise-Scale NLP with Hugging Face & Amazon SageMaker”. This workshop series consisted of 3 parts and covered: Getting Started, Going Production & MLOps. | Earlier this year we announced a strategic collaboration with Amazon to make it easier for companies to use Hugging Face Transformers in Amazon SageMaker, and ship cutting-edge Machine Learning features faster. We introduced new Hugging Face Deep Learning Containers (DLCs) to train and deploy Hugging Face Transformers in Amazon SageMaker.
In addition to the Hugging Face Inference DLCs, we created a [Hugging Face Inference Toolkit for SageMaker](https://github.com/aws/sagemaker-huggingface-inference-toolkit). This Inference Toolkit leverages the `pipelines` from the `transformers` library to allow zero-code deployments of models, without requiring any code for pre-or post-processing.
In October and November, we held a workshop series on “**Enterprise-Scale NLP with Hugging Face & Amazon SageMaker**”. This workshop series consisted of 3 parts and covered:
- Getting Started with Amazon SageMaker: Training your first NLP Transformer model with Hugging Face and deploying it
- Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models with Amazon SageMaker
- MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines
We recorded all of them so you are now able to do the whole workshop series on your own to enhance your Hugging Face Transformers skills with Amazon SageMaker or vice-versa.
Below you can find all the details of each workshop and how to get started.
⚙ Github Repository: [huggingface-sagemaker-workshop-series](https://github.com/philschmid/huggingface-sagemaker-workshop-series)
📺 Youtube Playlist: [Hugging Face SageMaker Playlist](https://www.youtube.com/playlist?list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ)
_Note: The repository contains instructions on how to access a temporary AWS account, which was available during the workshops. To do the workshops now, you need to use your own or your company's AWS account._
In Addition to the workshop we created a fully dedicated [Documentation](https://huggingface.co/docs/sagemaker/main) for Hugging Face and Amazon SageMaker, which includes all the necessary information.
If the workshops are not enough for you, we also have 15 additional getting-started samples in our [notebooks GitHub repository](https://github.com/huggingface/notebooks/tree/master/sagemaker), which cover topics like distributed training or leveraging [Spot Instances](https://aws.amazon.com/ec2/spot/?nc1=h_ls&cards.sort-by=item.additionalFields.startDateTime&cards.sort-order=asc).
## Workshop 1: **Getting Started with Amazon SageMaker: Training your first NLP Transformer model with Hugging Face and deploying it**
In Workshop 1 you will learn how to use Amazon SageMaker to train a Hugging Face Transformer model and deploy it afterwards.
- Prepare and upload a test dataset to S3
- Prepare a fine-tuning script to be used with Amazon SageMaker Training jobs
- Launch a training job and store the trained model into S3
- Deploy the model after successful training
⚙ Code Assets: [workshop_1_getting_started_with_amazon_sagemaker](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_1_getting_started_with_amazon_sagemaker)
📺 Youtube: [workshop_1_getting_started_with_amazon_sagemaker](https://www.youtube.com/watch?v=pYqjCzoyWyo&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=6&t=5s&ab_channel=HuggingFace)
---
## Workshop 2: **Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models with Amazon SageMaker**
In Workshop 2 learn how to use Amazon SageMaker to deploy, scale & monitor your Hugging Face Transformer models for production workloads.
- Run Batch Prediction on JSON files using a Batch Transform
- Deploy a model from [hf.co/models](https://hf.co/models) to Amazon SageMaker and run predictions
- Configure autoscaling for the deployed model
- Monitor the model to see avg. request time and set up alarms
⚙ Code Assets: [workshop_2_going_production](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_2_going_production)
📺 Youtube: [workshop_2_going_production](https://www.youtube.com/watch?v=whwlIEITXoY&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=6&t=61s)
---
## Workshop 3: **MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines**
In Workshop 3 learn how to build an End-to-End MLOps Pipeline for Hugging Face Transformers from training to production using Amazon SageMaker.
We are going to create an automated SageMaker Pipeline which:
- processes a dataset and uploads it to s3
- fine-tunes a Hugging Face Transformer model with the processed dataset
- evaluates the model against an evaluation set
- deploys the model if it performed better than a certain threshold
⚙ Code Assets: [workshop_3_mlops](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main/workshop_3_mlops)
📺 Youtube: [workshop_3_mlops](https://www.youtube.com/watch?v=XGyt8gGwbY0&list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ&index=7)
---
# Next Steps
We are planning to continue our workshops in early 2022 to build solution-oriented applications using Hugging Face Transformers, AWS & Amazon SageMaker. If you have an idea or a certain wish about something we should cover please open a thread on the forum: [https://discuss.huggingface.co/c/sagemaker/17](https://discuss.huggingface.co/c/sagemaker/17).
If you want to learn about Hugging Face Transformers on Amazon SageMaker you can check out our Amazon SageMaker documentation at: https://huggingface.co/docs/sagemaker/main
Or jump into one of our samples at: https://github.com/huggingface/notebooks/tree/master/sagemaker
---
Thanks for reading. If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
Fine-tune a non-English GPT-2 Model with Huggingface | https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface | 2020-09-06 | [
"NLP",
"GPT-2",
"Huggingface"
] | Fine-tune non-English, German GPT-2 model with Huggingface on German recipes. Using their Trainer class and Pipeline objects. | Unless you’re living under a rock, you probably have heard about [OpenAI](https://openai.com/)'s GPT-3 language model.
You might also have seen all the crazy demos, where the model writes `JSX` or `HTML` code, or shows off its capabilities in the area
of zero-shot / few-shot learning. [Simon O'Regan](https://twitter.com/Simon_O_Regan) wrote an
[article with excellent demos and projects built on top of GPT-3](https://towardsdatascience.com/gpt-3-demos-use-cases-implications-77f86e540dc1).
A downside of GPT-3 is its 175 billion parameters, which result in a model size of around 350GB. For comparison, the
biggest version of GPT-2 has 1.5 billion parameters, less than 1/116 of GPT-3's size.
In fact, with close to 175B trainable parameters, GPT-3 is much bigger than any other model out there. Here is a
comparison of the number of parameters of recent popular NLP models; GPT-3 clearly stands out.
![model-comparison](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/models.svg)
This is all magnificent, but you do not need 175 billion parameters to get good results in `text-generation`.
There are already tutorials on how to fine-tune GPT-2. But a lot of them are obsolete or outdated. In this tutorial, we
are going to use the `transformers` library by [Huggingface](https://huggingface.co/) in their newest version (3.1.0).
We will use the new `Trainer` class and fine-tune our GPT-2 Model with German recipes from
[chefkoch.de](http://chefkoch.de).
You can find everything we are doing in this
[colab notebook.](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)
---
## Transformers Library by [Huggingface](https://huggingface.co/)
![/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/transformers-logo](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/transformers-logo.png)
The [Transformers library](https://github.com/huggingface/transformers) provides state-of-the-art machine learning
architectures like BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5 for Natural Language Understanding (NLU), and
Natural Language Generation (NLG). It also provides thousands of pre-trained models in 100+ different languages and is
deeply interoperable between PyTorch & TensorFlow 2.0. It enables developers to fine-tune machine learning models for
different NLP-tasks like text classification, sentiment analysis, question-answering, or text generation.
---
## Tutorial
In the tutorial, we fine-tune a German GPT-2 from the [Huggingface model hub](https://huggingface.co/models). As data,
we use the [German Recipes Dataset](https://www.kaggle.com/sterby/german-recipes-dataset), which consists of 12190
german recipes with metadata crawled from [chefkoch.de](http://chefkoch.de/).
We will use the recipe Instructions to fine-tune our GPT-2 model and let it write recipes afterwards that we can cook.
![colab-snippet](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/colab-snippet.png)
We use a Google Colab with a GPU runtime for this tutorial. If you are not sure how to use a GPU Runtime take a look
[here](https://www.philschmid.de/google-colab-the-free-gpu-tpu-jupyter-notebook-service).
**What are we going to do:**
- load the dataset from Kaggle
- prepare the dataset and build a `TextDataset`
- initialize `Trainer` with `TrainingArguments` and GPT-2 model
- train and save the model
- test the model
You can find everything we do in this
[colab notebook.](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)
---
## Load the dataset from Kaggle
As already mentioned in the introduction of the tutorial we use the
"[German Recipes Dataset](https://www.kaggle.com/sterby/german-recipes-dataset)" dataset from Kaggle. The dataset
consists of 12190 german recipes with metadata crawled from [chefkoch.de](http://chefkoch.de/). In this example, we only
use the Instructions of the recipes. We download the dataset by using the "Download" button and upload it to our colab
notebook since it only has a zipped size of 4,7MB.
![kaggle-dataset](/static/blog/fine-tune-a-non-english-gpt-2-model-with-huggingface/kaggle-dataset.png)
```python
#upload files to your colab environment
from google.colab import files
uploaded = files.upload()
#132879_316218_bundle_archive.zip(application/zip) - 4749666 bytes, last modified: 29.8.2020 - 100% done
#Saving 132879_316218_bundle_archive.zip to 132879_316218_bundle_archive.zip
```
After we uploaded the file we use `unzip` to extract the `recipes.json` .
```python
!unzip '132879_316218_bundle_archive.zip'
#Archive: 132879_316218_bundle_archive.zip
#inflating: recipes.json
```
_You also could use the `kaggle` CLI to download the dataset, but be aware you need your Kaggle credentials in the colab
notebook._
```bash
kaggle datasets download -d sterby/german-recipes-dataset
```
Here is an example of a recipe.
```json
{
"Url": "https://www.chefkoch.de/rezepte/2718181424631245/",
"Instructions": "Vorab folgende Bemerkung: Alle Mengen sind Circa-Angaben und können nach Geschmack variiert werden!Das Gemüse putzen und in Stücke schneiden (die Tomaten brauchen nicht geschält zu werden!). Alle Zutaten werden im Mixer püriert, das muss wegen der Mengen in mehreren Partien geschehen, und zu jeder Partie muss auch etwas von der Brühe gegeben werden. Auch das Toastbrot wird mitpüriert, es dient der Bindung. Am Schluss lässt man das \u00d6l bei laufendem Mixer einflie\u00dfen. In einer gro\u00dfen Schüssel alles gut verrühren und für mindestens eine Stunde im Kühlschrank gut durchkühlen lassen.Mit frischem Baguette an hei\u00dfen Tagen ein Hochgenuss.Tipps: Wer mag, kann in kleine Würfel geschnittene Tomate, Gurke und Zwiebel separat dazu reichen.Die Suppe eignet sich hervorragend zum Einfrieren, so dass ich immer diese gro\u00dfe Menge zubereite, um den Arbeitsaufwand gering zu halten.",
"Ingredients": [
"1 kg Strauchtomate(n)",
"1 Gemüsezwiebel(n)",
"1 Salatgurke(n)",
"1 Paprikaschote(n) nach Wahl",
"6 Zehe/n Knoblauch",
"1 Chilischote(n)",
"15 EL Balsamico oder Weinessig",
"6 EL Olivenöl",
"4 Scheibe/n Toastbrot",
"Salz und Pfeffer",
"1 kl. Dose/n Tomate(n), geschälte, oder 1 Pck. pürierte Tomaten",
"1/2Liter Brühe, kalte"
],
"Day": 1,
"Name": "Pilz Stroganoff",
"Year": 2017,
"Month": "July",
"Weekday": "Saturday"
}
```
## Prepare the dataset and build a `TextDataset`
The next step is to extract the instructions from all recipes and build a `TextDataset`. The `TextDataset` is a custom
implementation of the
[Pytroch `Dataset` class](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#dataset-class) implemented
by the transformers library. If you want to know more about `Dataset` in Pytorch you can check out this
[youtube video](https://www.youtube.com/watch?v=PXOzkkB5eH0&ab_channel=PythonEngineer).
First, we split the `recipes.json` into a `train` and `test` section. Then we extract `Instructions` from the recipes
and write them into a `train_dataset.txt` and `test_dataset.txt`
```python
import re
import json
from sklearn.model_selection import train_test_split
with open('recipes.json') as f:
data = json.load(f)
def build_text_files(data_json, dest_path):
f = open(dest_path, 'w')
data = ''
for texts in data_json:
summary = str(texts['Instructions']).strip()
summary = re.sub(r"\s", " ", summary)
data += summary + " "
f.write(data)
train, test = train_test_split(data,test_size=0.15)
build_text_files(train,'train_dataset.txt')
build_text_files(test,'test_dataset.txt')
print("Train dataset length: "+str(len(train)))
print("Test dataset length: "+ str(len(test)))
#Train dataset length: 10361
#Test dataset length: 1829
```
The next step is to download the tokenizer. We use the tokenizer from the `german-gpt2` model.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("anonymous-german-nlp/german-gpt2")
train_path = 'train_dataset.txt'
test_path = 'test_dataset.txt'
```
Now we can build our `TextDataset`. Therefore we create a `TextDataset` instance with the `tokenizer` and the path to
our datasets. We also create our `data_collator`, which is used in training to form a batch from our dataset.
```python
from transformers import TextDataset,DataCollatorForLanguageModeling
def load_dataset(train_path,test_path,tokenizer):
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path=train_path,
block_size=128)
test_dataset = TextDataset(
tokenizer=tokenizer,
file_path=test_path,
block_size=128)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False,
)
return train_dataset,test_dataset,data_collator
train_dataset,test_dataset,data_collator = load_dataset(train_path,test_path,tokenizer)
```
---
## Initialize `Trainer` with `TrainingArguments` and GPT-2 model
The [Trainer](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer) class provides an API
for feature-complete training. It is used in most of
the [example scripts](https://huggingface.co/transformers/examples.html) from Huggingface. Before we can instantiate our
`Trainer` we need to download our GPT-2 model and create
[TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments). The
`TrainingArguments` are used to define the Hyperparameters, which we use in the training process like the
`learning_rate`, `num_train_epochs`, or `per_device_train_batch_size`. You can find a complete list
[here](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments).
```python
from transformers import Trainer, TrainingArguments, AutoModelWithLMHead
model = AutoModelWithLMHead.from_pretrained("anonymous-german-nlp/german-gpt2")
training_args = TrainingArguments(
output_dir="./gpt2-gerchef", #The output directory
overwrite_output_dir=True, #overwrite the content of the output directory
num_train_epochs=3, # number of training epochs
per_device_train_batch_size=32, # batch size for training
per_device_eval_batch_size=64, # batch size for evaluation
eval_steps = 400, # Number of update steps between two evaluations.
save_steps=800, # after # steps model is saved
warmup_steps=500,# number of warmup steps for learning rate scheduler
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=test_dataset,
prediction_loss_only=True,
)
```
---
## Train and Save the model
To train the model we can simply run `trainer.train()`.
```python
trainer.train()
```
After training is done you can save the model by calling `save_model()`. This will save the trained model to our
`output_dir` from our `TrainingArguments`.
```python
trainer.save_model()
```
---
## Test the model
To test the model we use another
[highlight of the transformers library](https://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines)
called `pipeline`. [Pipelines](https://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines) are
objects that offer a simple API dedicated to several tasks, `text-generation` amongst others.
```python
from transformers import pipeline
chef = pipeline('text-generation',model='./gpt2-gerchef', tokenizer='anonymous-german-nlp/german-gpt2',config={'max_length':800})
result = chef('Zuerst Tomaten')[0]['generated_text']
```
result:
"_Zuerst Tomaten dazu geben und 2 Minuten kochen lassen. Die Linsen ebenfalls in der Brühe anbrühen.Die Tomaten
auspressen. Mit der Butter verrühren. Den Kohl sowie die Kartoffeln andünsten, bis sie weich sind. "_
Well, that's it. We've done it 👨🏻‍🍳. We have successfully fine-tuned our GPT-2 model to write recipes for us.
To improve our results we could train it longer and adjust our `TrainingArguments` or enlarge the dataset.
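As a hedged sketch of the first option (the values below are illustrative, not tuned), training longer could be as simple as adjusting the arguments and re-running training:

```python
# Hypothetical adjustment: train for more epochs than the 3 used above; values are illustrative only
training_args = TrainingArguments(
    output_dir="./gpt2-gerchef",
    overwrite_output_dir=True,
    num_train_epochs=10,             # train longer than before
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    eval_steps=400,
    save_steps=800,
    warmup_steps=500,
)
```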
---
You can find everything in this
[colab notebook.](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)
Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect
with me on [Twitter](https://twitter.com/_philschmid) or
[LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |
An Amazon SageMaker Inference comparison with Hugging Face Transformers | https://www.philschmid.de/sagemaker-inference-comparison | 2022-05-17 | [
"HuggingFace",
"AWS",
"BERT",
"SageMaker"
] | Learn about the different existing Amazon SageMaker Inference options and and how to use them. | _"Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment."_ - [AWS Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html)
As of today, Amazon SageMaker offers 4 different inference options:
- [Real-Time inference](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html)
- [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html)
- [Asynchronous Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html)
- [Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html)
Each of these inference options has different characteristics and use cases. Therefore we have created a table comparing the currently existing SageMaker inference options in terms of latency, execution period, maximum payload size, and pricing, together with getting-started examples on how to use each of the inference options.
**Comparison table**
| Option | latency budget | execution period | max payload size | real-world example | accelerators (GPU) | pricing |
| --------------- | -------------- | ----------------------- | ---------------- | ----------------------- | ------------------ | ------------------------------------------------------------- |
| real-time       | milliseconds   | constantly              | 6MB              | route estimation        | Yes                | uptime of the endpoint                                         |
| batch transform | hours          | once a day/week         | Unlimited        | nightly embedding jobs  | Yes                | prediction (transform) time                                    |
| async inference | minutes        | every few minutes/hours | 1GB              | post-call transcription | Yes                | uptime of the endpoint, can scale to 0 when there is no load   |
| serverless | seconds | every few minutes | 6MB | PoC for classification | No | compute time (serverless) |
**Examples**
You will learn how to:
1. Deploy a Hugging Face Transformers For Real-Time inference.
2. Deploy a Hugging Face Transformers for Batch Transform Inference.
3. Deploy a Hugging Face Transformers for Asynchronous Inference.
4. Deploy a Hugging Face Transformers for Serverless Inference.
---
_If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances). You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
## Permissions
_If you are going to use Sagemaker in a local environment. You need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it._
```python
!pip install "sagemaker>=2.48.0" --upgrade
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
## [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit)
The SageMaker Hugging Face Inference Toolkit is an open-source library for serving 🤗 Transformers models on Amazon SageMaker. This library provides default pre-processing, prediction, and post-processing for certain 🤗 Transformers models and tasks using the `transformers` `pipelines`.
The Inference Toolkit accepts inputs in the `inputs` key, and supports additional pipeline `parameters` in the `parameters` key. You can provide any of the supported kwargs from `pipelines` as `parameters`.
Tasks supported by the Inference Toolkit API include:
- **`text-classification`**
- **`sentiment-analysis`**
- **`token-classification`**
- **`feature-extraction`**
- **`fill-mask`**
- **`summarization`**
- **`translation_xx_to_yy`**
- **`text2text-generation`**
- **`text-generation`**
- **`audio-classification`**
- **`automatic-speech-recognition`**
- **`conversational`**
- **`image-classification`**
- **`image-segmentation`**
- **`object-detection`**
- **`table-question-answering`**
- **`zero-shot-classification`**
- **`zero-shot-image-classification`**
See the following request examples for some of the tasks:
**text-classification**
```python
{
"inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}
```
**text-generation parameterized**
```python
{
"inputs": "Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021, is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016, was recognized for its work in democratizing NLP, the global market value for which is expected to hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team.",
"parameters": {
"repetition_penalty": 4.0,
"length_penalty": 1.5
}
}
```
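As an additional illustration (not part of the original list of examples), a `summarization` request would follow the same convention; the input text and parameter values below are placeholders:

```python
# Hypothetical summarization payload; min_length/max_length are standard pipeline kwargs
{
    "inputs": "Hugging Face and Amazon SageMaker make it easy to deploy transformer models. The Inference Toolkit handles pre- and post-processing so you only send raw text and receive predictions.",
    "parameters": {
        "min_length": 10,
        "max_length": 40
    }
}
```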
More documentation and a list of supported tasks can be found in the [documentation](https://huggingface.co/docs/sagemaker/reference#inference-toolkit-api).
## 1. Deploy a Hugging Face Transformers For Real-Time inference.
### What are Amazon SageMaker Real-Time Endpoints?
Real-time inference is ideal for inference workloads where you have real-time, interactive, low latency requirements. You can deploy your model to SageMaker hosting services and get an endpoint that can be used for inference. These endpoints are fully managed and support autoscaling.
**Deploying a model using SageMaker hosting services is a three-step process:**
1. **Create a model in SageMaker** —By creating a model, you tell SageMaker where it can find the model components.
2. **Create an endpoint configuration for an HTTPS endpoint** —You specify the name of one or more models in production variants and the ML compute instances that you want SageMaker to launch to host each production variant.
3. **Create an HTTPS endpoint** —Provide the endpoint configuration to SageMaker. The service launches the ML compute instances and deploys the model or models as specified in the configuration
![endpoint-overview](/static/blog/sagemaker-inference-comparison/sm-endpoint.png)
### Deploy a Hugging Face Transformer from the [Hub](hf.co/models)
Detailed Notebook: [deploy_model_from_hf_hub](https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb)
To deploy a model directly from the Hub to SageMaker we need to define 2 environment variables when creating the `HuggingFaceModel`. We need to define:
- `HF_MODEL_ID`: defines the model id, which will be automatically loaded from [huggingface.co/models](http://huggingface.co/models) when creating our SageMaker Endpoint. The 🤗 Hub provides +14,000 models, all available through this environment variable.
- `HF_TASK`: defines the task for the used 🤗 Transformers pipeline. A full list of tasks can be found [here](https://huggingface.co/docs/sagemaker/reference#inference-toolkit-api).
```python
from sagemaker.huggingface import HuggingFaceModel
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'distilbert-base-uncased-distilled-squad', # model_id from hf.co/models
'HF_TASK':'question-answering' # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model_rth = HuggingFaceModel(
env=hub, # hugging face hub configuration
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version="py38", # python version of the DLC
)
# deploy model to SageMaker Inference
predictor_rth = huggingface_model_rth.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge"
)
```
After the model is deployed we can use the `predictor` to send requests.
```python
# example request, you always need to define "inputs"
data = {
"inputs": {
"question": "What is used for inference?",
"context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
}
}
# request
predictor_rth.predict(data)
```
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
predictor_rth.delete_model()
predictor_rth.delete_endpoint()
```
### Deploy a Hugging Face Transformer from Amazon S3
Detailed Notebook: [deploy_model_from_s3](https://github.com/huggingface/notebooks/blob/main/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb)
To deploy a model from Amazon S3 to SageMaker we need to define the `model_data` argument when creating the `HuggingFaceModel`, pointing to the Amazon S3 URI of a `model.tar.gz` archive that contains our trained model artifacts.
```python
from sagemaker.huggingface import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model_rts3 = HuggingFaceModel(
model_data="s3://hf-sagemaker-inference/model.tar.gz", # path to your trained sagemaker model
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version="py38", # python version of the DLC
)
# deploy model to SageMaker Inference
predictor_rts3 = huggingface_model_rts3.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge"
)
```
After the model is deployed we can use the `predictor` to send requests.
```python
# example request, you always need to define "inputs"
data = {
"inputs": "The new Hugging Face SageMaker DLC makes it super easy to deploy models in production. I love it!"
}
# request
predictor_rts3.predict(data)
# [{'label': 'POSITIVE', 'score': 0.9996660947799683}]
```
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
predictor_rts3.delete_model()
predictor_rts3.delete_endpoint()
```
## 2. Deploy a Hugging Face Transformers for Batch Transform Inference.
Detailed Notebook: [batch_transform_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb)
### What is Amazon SageMaker Batch Transform?
A Batch Transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify. Similar to real-time hosting, it creates a web server that takes in HTTP POST requests, but additionally runs an Agent. The Agent reads the data from Amazon S3, sends it to the web server, and stores the predictions back to Amazon S3 at the end. The benefit of Batch Transform is that the instances are only used during the "job" and are stopped afterwards.
![batch-transform](/static/blog/sagemaker-inference-comparison/batch-transform-v2.png)
**Use batch transform when you:**
- Want to get inferences for an entire dataset and index them to serve inferences in real time
- Don't need a persistent endpoint that applications (for example, web or mobile apps) can call to get inferences
- Don't need the subsecond latency that SageMaker hosted endpoints provide
```python
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.s3 import S3Uploader,s3_path_join
dataset_jsonl_file="./tweet_data.jsonl"
# uploads a given file to S3.
input_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"london/batch_transform/input")
output_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"london/batch_transform/output")
s3_file_uri = S3Uploader.upload(dataset_jsonl_file,input_s3_path)
print(f"{dataset_jsonl_file} uploaded to {s3_file_uri}")
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'cardiffnlp/twitter-roberta-base-sentiment',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
instance_count=1, # number of instances used for running the batch job
instance_type='ml.m5.xlarge',# instance type for the batch job
output_path=output_s3_path, # we are using the same s3 path to save the output with the input
strategy='SingleRecord') # How we are sending the "requests" to the endpoint
# starts batch transform job and uses s3 data as input
batch_job.transform(
data=s3_file_uri, # preprocessed file location on s3
content_type='application/json',# mime-type of the file
split_type='Line') # how the datapoints are split, here lines since it is `.jsonl`
```
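For reference, the `tweet_data.jsonl` file is expected to contain one JSON object per line using the `inputs` key of the Inference Toolkit, matching `content_type='application/json'` and `split_type='Line'` above. A hedged sketch of how such a file could be created (the tweets below are made-up placeholders):

```python
# Hypothetical sketch: build a .jsonl input file with one {"inputs": ...} object per line
import json

example_tweets = [
    "I love the new Hugging Face Inference DLCs!",         # placeholder text
    "Batch Transform makes offline scoring really easy.",  # placeholder text
]
with open("tweet_data.jsonl", "w") as f:
    for tweet in example_tweets:
        f.write(json.dumps({"inputs": tweet}) + "\n")
```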
## 3. Deploy a Hugging Face Transformers for Asynchronous Inference.
Detailed Notebook: [async_inference_hf_hub](https://github.com/huggingface/notebooks/blob/main/sagemaker/16_async_inference_hf_hub/sagemaker-notebook.ipynb)
### What is Amazon SageMaker Asynchronous Inference?
Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously. Compared to [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html), [Asynchronous Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) provides immediate access to the results of the inference job rather than waiting for the job to complete.
![async-inference](../../imgs/async-inference.png)
**What's the difference compared to batch transform & real-time inference?**
- the request payload is uploaded to Amazon S3 and the Amazon S3 URI is passed in the request
- endpoints are always up and running but can scale to zero to save costs
- responses are uploaded to Amazon S3 as well
- you can create an Amazon SNS topic to receive notifications when predictions are finished
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.async_inference.async_inference_config import AsyncInferenceConfig
from sagemaker.s3 import s3_path_join
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'distilbert-base-uncased-finetuned-sst-2-english',
'HF_TASK':'text-classification'
}
# create Hugging Face Model Class
huggingface_model_async = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# create async endpoint configuration
async_config = AsyncInferenceConfig(
output_path=s3_path_join("s3://",sagemaker_session_bucket,"async_inference/output") , # Where our results will be stored
# notification_config={
# "SuccessTopic": "arn:aws:sns:us-east-2:123456789012:MyTopic",
# "ErrorTopic": "arn:aws:sns:us-east-2:123456789012:MyTopic",
# }, # Notification configuration
)
# deploy the endpoint endpoint
async_predictor = huggingface_model_async.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge",
async_inference_config=async_config
)
```
Calling `predict()` will upload our `data` to Amazon S3 and run inference against it. Since we are using `predict`, it will block until the inference is complete.
```python
data = {
"inputs": [
"it 's a charming and often affecting journey .",
"it 's slow -- very , very slow",
"the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
"the emotions are raw and will strike a nerve with anyone who 's ever had family trauma ."
]
}
res = async_predictor.predict(data=data)
print(res)
# [{'label': 'POSITIVE', 'score': 0.9998838901519775}, {'label': 'NEGATIVE', 'score': 0.999727189540863}, {'label': 'POSITIVE', 'score': 0.9998838901519775}, {'label': 'POSITIVE', 'score': 0.9994854927062988}]
```
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
async_predictor.delete_model()
async_predictor.delete_endpoint()
```
## 4. Deploy a Hugging Face Transformers for Serverless Inference.
Detailed Notebook: [serverless_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/19_serverless_inference/sagemaker-notebook.ipynb)
### What is Amazon SageMaker Serverless Inference?
[Amazon SageMaker Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html) is a purpose-built inference option that makes it easy for you to deploy and scale ML models. Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic, eliminating the need to choose instance types or manage scaling policies. This takes away the undifferentiated heavy lifting of selecting and managing servers. Serverless Inference integrates with AWS Lambda to offer you high availability, built-in fault tolerance and automatic scaling.
![serverless](/static/blog/sagemaker-inference-comparison/serverless.png)
**Use Serverless Inference when you:**
- Want to get started quickly in a cost-effective way
- Don't need the subsecond latency that SageMaker hosted endpoints provide
- Are building proofs-of-concept where cold starts or scalability are not mission-critical
```python
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.serializers import DataSerializer
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'facebook/wav2vec2-base-960h',
'HF_TASK':'automatic-speech-recognition',
}
# create Hugging Face Model Class
huggingface_model_sls = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
# Specify MemorySizeInMB and MaxConcurrency in the serverless config object
serverless_config = ServerlessInferenceConfig(
memory_size_in_mb=4096, max_concurrency=10,
)
# create a serializer for the data
audio_serializer = DataSerializer(content_type='audio/x-audio') # using x-audio to support multiple audio formats
# deploy the endpoint endpoint
predictor_sls = huggingface_model_sls.deploy(
serverless_inference_config=serverless_config,
serializer=audio_serializer, # serializer for our audio data.
)
```
```python
!wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
audio_path = "sample1.flac"
res = predictor_sls.predict(data=audio_path)
print(res)
# {'text': "GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"}
```
We can easily delete the endpoint again with the following command:
```python
# delete endpoint
predictor_sls.delete_model()
predictor_sls.delete_endpoint()
```
## Conclusion
Every currently available inference option has a good use case and allows companies to optimize their machine learning workloads in the best possible way. Not only that: with the addition of SageMaker Serverless, companies can now quickly build cost-effective proofs-of-concept and, after success, move them to real-time endpoints by changing one line of code.
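For illustration, that switch is roughly the difference between passing a serverless config and passing an instance type to `deploy`. A hedged sketch, reusing the objects from the serverless example above (the instance type is illustrative):

```python
# Hypothetical sketch: the same HuggingFaceModel deployed serverless vs. real-time.
# Serverless (as shown in section 4):
# predictor = huggingface_model_sls.deploy(serverless_inference_config=serverless_config)
# Real-time: swap the serverless config for an instance count and type.
predictor = huggingface_model_sls.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)
```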
Furthermore, this article has shown how easy it is to get started with Hugging Face Transformers on Amazon SageMaker and how you can integrate state-of-the-art machine learning into existing applications.
Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). |