{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "09.deploy-BERT-with-FastAPI.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YpI39wyEYQ4y",
        "colab_type": "text"
      },
      "source": [
        "# Deploy BERT for Sentiment Analysis with Transformers by Hugging Face and FastAPI\n",
        "\n",
        "> TL;DR Learn how to create a REST API for Sentiment Analysis using a pre-trained BERT model\n",
        "\n",
        "- [Read the tutorial](https://www.curiousily.com/posts/deploy-bert-for-sentiment-analysis-as-rest-api-using-pytorch-transformers-by-hugging-face-and-fastapi/)\n",
        "- [Run the notebook in your browser (Google Colab)](https://colab.research.google.com/drive/154jf65arX4cHGaGXl2_kJ1DT8FmF4Lhf)\n",
        "- [Project on GitHub](https://github.com/curiousily/Deploy-BERT-for-Sentiment-Analysis-with-FastAPI)\n",
        "- [`Getting Things Done with Pytorch` on GitHub](https://github.com/curiousily/Getting-Things-Done-with-Pytorch)\n",
        "\n",
        "In this tutorial, you'll learn how to deploy a pre-trained BERT model as a REST API using FastAPI. Here are the steps:\n",
        "\n",
        "- Initialize a project using Pipenv\n",
        "- Create a project skeleton\n",
        "- Add the pre-trained model and create an interface to abstract the inference logic\n",
        "- Update the request handler function to return predictions using the model\n",
        "- Start the server and send a test request"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Am2k21g6xOTH",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "## Project setup\n",
        "\n",
        "We'll manage our dependencies using [Pipenv](https://pipenv.pypa.io/en/latest/). Here's the complete Pipfile:\n",
        "\n",
        "```toml\n",
        "[[source]]\n",
        "name = \"pypi\"\n",
        "url = \"https://pypi.org/simple\"\n",
        "verify_ssl = true\n",
        "\n",
        "[dev-packages]\n",
        "black = \"==19.10b0\"\n",
        "isort = \"*\"\n",
        "flake8 = \"*\"\n",
        "gdown = \"*\"\n",
        "\n",
        "[packages]\n",
        "fastapi = \"*\"\n",
        "uvicorn = \"*\"\n",
        "pydantic = \"*\"\n",
        "torch = \"*\"\n",
        "transformers = \"*\"\n",
        "\n",
        "[requires]\n",
        "python_version = \"3.8\"\n",
        "\n",
        "[pipenv]\n",
        "allow_prereleases = true\n",
        "```\n",
        "\n",
        "The backbone of our REST API will be:\n",
        "- [FastAPI](https://fastapi.tiangolo.com/) - lets you easily set up a REST API (some say it might be fast, too)\n",
        "- [Uvicorn](https://www.uvicorn.org/) - a lightning-fast ASGI server that serves our app (and lets us use async Python)\n",
        "- [Pydantic](https://pydantic-docs.helpmanual.io/) - data validation by introducing types for our request and response data.\n",
        "\n",
        "Some tools will help us write some better code (thanks to [Momchil Hardalov](https://github.com/mhardalov) for the configs):\n",
        "\n",
        "- [Black](https://black.readthedocs.io/en/stable/) - code formatting\n",
        "- [isort](https://timothycrosley.github.io/isort/) - imports sorting\n",
        "- [flake8](https://flake8.pycqa.org/en/latest/) - check for code style (PEP 8) compliance\n",
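        "\n",
        "With the `Pipfile` above in the project root (and Pipenv itself installed, e.g. via `pip install pipenv`), setting up the environment boils down to:\n",
        "\n",
        "```bash\n",
        "pipenv install --dev  # resolve and install packages plus the dev tools\n",
        "pipenv shell          # activate the project's virtual environment\n",
        "```\n",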
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TigJHTXUT1mG",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "## Building a skeleton REST API\n",
        "\n",
        "Let's start by creating a skeleton structure for our project. Your directory should look like this:\n",
        "\n",
        "```bash\n",
        ".\n",
        "├── Pipfile\n",
        "├── Pipfile.lock\n",
        "└── sentiment_analyzer\n",
        "    └── api.py\n",
        "```\n",
        "\n",
        "We'll start by creating a dummy/stubbed response to test that everything is working end-to-end. Here are the contents of `api.py`:\n",
        "\n",
        "```python\n",
        "from typing import Dict\n",
        "\n",
        "from fastapi import Depends, FastAPI\n",
        "from pydantic import BaseModel\n",
        "\n",
        "app = FastAPI()\n",
        "\n",
        "\n",
        "class SentimentRequest(BaseModel):\n",
        "    text: str\n",
        "\n",
        "\n",
        "class SentimentResponse(BaseModel):\n",
        "\n",
        "    probabilities: Dict[str, float]\n",
        "    sentiment: str\n",
        "    confidence: float\n",
        "\n",
        "\n",
        "@app.post(\"/predict\", response_model=SentimentResponse)\n",
        "def predict(request: SentimentRequest):\n",
        "    return SentimentResponse(\n",
        "        sentiment=\"positive\",\n",
        "        confidence=0.98,\n",
        "        probabilities=dict(negative=0.005, neutral=0.015, positive=0.98)\n",
        "    )\n",
        "```\n",
        "\n",
        "Our API expects a single field, `text` - the review to analyze. The response contains the predicted sentiment, its confidence (the softmax output for that sentiment), and the probabilities for all sentiments.\n"
      ]
    },
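    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check (plain Python, no server needed) of the invariants the stubbed payload - and, later, the real one - should satisfy: the probabilities sum to one, and the reported sentiment is the class with the top probability:\n",
        "\n",
        "```python\n",
        "# The canned payload returned by the stubbed /predict handler\n",
        "stub = {\n",
        "    \"sentiment\": \"positive\",\n",
        "    \"confidence\": 0.98,\n",
        "    \"probabilities\": {\"negative\": 0.005, \"neutral\": 0.015, \"positive\": 0.98},\n",
        "}\n",
        "\n",
        "# The reported sentiment should be the class with the highest probability\n",
        "top_sentiment = max(stub[\"probabilities\"], key=stub[\"probabilities\"].get)\n",
        "```\n"
      ]
    },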
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UXkSZyeGGTuu",
        "colab_type": "text"
      },
      "source": [
        "## Adding our model\n",
        "\n",
        "Here's the file structure of the complete project:\n",
        "\n",
        "```bash\n",
        ".\n",
        "├── assets\n",
        "│   └── model_state_dict.bin\n",
        "├── bin\n",
        "│   └── download_model\n",
        "├── config.json\n",
        "├── Pipfile\n",
        "├── Pipfile.lock\n",
        "└── sentiment_analyzer\n",
        "    ├── api.py\n",
        "    └── classifier\n",
        "        ├── model.py\n",
        "        └── sentiment_classifier.py\n",
        "```\n",
        "\n",
        "We'll need the pre-trained model. We'll write the `download_model` script for that:\n",
        "\n",
        "```python\n",
        "#!/usr/bin/env python\n",
        "import gdown\n",
        "\n",
        "gdown.download(\n",
        "    \"https://drive.google.com/uc?id=1V8itWtowCYnb2Bc9KlK9SxGff9WwmogA\",\n",
        "    \"assets/model_state_dict.bin\",\n",
        ")\n",
        "```\n",
        "\n",
        "The model can be downloaded from my Google Drive. Let's get it:\n",
        "\n",
        "```bash\n",
        "python bin/download_model\n",
        "```\n",
        "\n",
        "Our pre-trained model is stored as a PyTorch state dict. We need to load it and use it to predict the text sentiment. \n",
        "\n",
        "Let's start with the config file `config.json`:\n",
        "\n",
        "```json\n",
        "{\n",
        "    \"BERT_MODEL\": \"bert-base-cased\",\n",
        "    \"PRE_TRAINED_MODEL\": \"assets/model_state_dict.bin\",\n",
        "    \"CLASS_NAMES\": [\n",
        "        \"negative\",\n",
        "        \"neutral\",\n",
        "        \"positive\"\n",
        "    ],\n",
        "    \"MAX_SEQUENCE_LEN\": 160\n",
        "}\n",
        "```\n",
        "\n",
        "Next, we'll define the `sentiment_classifier.py`:\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "from torch import nn\n",
        "from transformers import BertModel\n",
        "\n",
        "with open(\"config.json\") as json_file:\n",
        "    config = json.load(json_file)\n",
        "\n",
        "\n",
        "class SentimentClassifier(nn.Module):\n",
        "    def __init__(self, n_classes):\n",
        "        super(SentimentClassifier, self).__init__()\n",
        "        self.bert = BertModel.from_pretrained(config[\"BERT_MODEL\"])\n",
        "        self.drop = nn.Dropout(p=0.3)\n",
        "        self.out = nn.Linear(self.bert.config.hidden_size, n_classes)\n",
        "\n",
        "    def forward(self, input_ids, attention_mask):\n",
        "        _, pooled_output = self.bert(input_ids=input_ids, attention_mask=attention_mask)\n",
        "        output = self.drop(pooled_output)\n",
        "        return self.out(output)\n",
        "```\n",
        "\n",
        "This is the same model we've used for training; it just reads its settings from the config file. Note: recent `transformers` releases (4.x+) return a model output object instead of a tuple, which breaks the unpacking in `forward()`. If you're on a newer version, pass `return_dict=False` to the `self.bert(...)` call (or use the `pooler_output` attribute).\n",
        "\n",
        "Recall that BERT requires some special text preprocessing. We need a place to use the tokenizer from Hugging Face. We also need to do some massaging of the model outputs to convert them to our API response format.\n",
        "\n",
        "The `Model` provides a nice abstraction (a Facade) to our classifier. It exposes a single `predict()` method and should be pretty generalizable if you want to use the same project structure as a template for your next deployment. The `model.py` file:\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "from transformers import BertTokenizer\n",
        "\n",
        "from .sentiment_classifier import SentimentClassifier\n",
        "\n",
        "with open(\"config.json\") as json_file:\n",
        "    config = json.load(json_file)\n",
        "\n",
        "\n",
        "class Model:\n",
        "    def __init__(self):\n",
        "\n",
        "        self.device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
        "\n",
        "        self.tokenizer = BertTokenizer.from_pretrained(config[\"BERT_MODEL\"])\n",
        "\n",
        "        classifier = SentimentClassifier(len(config[\"CLASS_NAMES\"]))\n",
        "        classifier.load_state_dict(\n",
        "            torch.load(config[\"PRE_TRAINED_MODEL\"], map_location=self.device)\n",
        "        )\n",
        "        classifier = classifier.eval()\n",
        "        self.classifier = classifier.to(self.device)\n",
        "\n",
        "    def predict(self, text):\n",
        "        encoded_text = self.tokenizer.encode_plus(\n",
        "            text,\n",
        "            max_length=config[\"MAX_SEQUENCE_LEN\"],\n",
        "            add_special_tokens=True,\n",
        "            return_token_type_ids=False,\n",
        "            pad_to_max_length=True,\n",
        "            return_attention_mask=True,\n",
        "            return_tensors=\"pt\",\n",
        "        )\n",
        "        input_ids = encoded_text[\"input_ids\"].to(self.device)\n",
        "        attention_mask = encoded_text[\"attention_mask\"].to(self.device)\n",
        "\n",
        "        with torch.no_grad():\n",
        "            probabilities = F.softmax(self.classifier(input_ids, attention_mask), dim=1)\n",
        "        confidence, predicted_class = torch.max(probabilities, dim=1)\n",
        "        confidence = confidence.cpu().item()\n",
        "        predicted_class = predicted_class.cpu().item()\n",
        "        probabilities = probabilities.flatten().cpu().numpy().tolist()\n",
        "        return (\n",
        "            config[\"CLASS_NAMES\"][predicted_class],\n",
        "            confidence,\n",
        "            dict(zip(config[\"CLASS_NAMES\"], probabilities)),\n",
        "        )\n",
        "\n",
        "\n",
        "model = Model()\n",
        "\n",
        "\n",
        "def get_model():\n",
        "    return model\n",
        "```\n",
        "\n",
        "We'll do the inference on the GPU, if one is available. We return the name of the predicted sentiment, the confidence, and the probabilities for each sentiment.\n",
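        "\n",
        "The return-value massaging at the end of `predict()` is easy to get wrong, so here is the same softmax-to-response mapping sketched in plain Python (no torch, and the logits are made-up) - the shapes match what the API handler expects:\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "CLASS_NAMES = [\"negative\", \"neutral\", \"positive\"]\n",
        "\n",
        "\n",
        "def postprocess(logits):\n",
        "    # Softmax over the raw model outputs\n",
        "    exps = [math.exp(x) for x in logits]\n",
        "    total = sum(exps)\n",
        "    probs = [e / total for e in exps]\n",
        "    # The winning class; its probability is the confidence\n",
        "    predicted_class = max(range(len(probs)), key=probs.__getitem__)\n",
        "    return (\n",
        "        CLASS_NAMES[predicted_class],\n",
        "        probs[predicted_class],\n",
        "        dict(zip(CLASS_NAMES, probs)),\n",
        "    )\n",
        "\n",
        "\n",
        "sentiment, confidence, probabilities = postprocess([-2.1, -0.3, 3.4])\n",
        "```\n",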
        "\n",
        "But why not define all that logic in the request handler function? For a tutorial of this size, it's arguably overengineering. But in the real world, once you start testing your implementation, the separation will be such a nice bonus.\n",
        "\n",
        "You see, mixing everything into the request handler will result in countless sleepless nights. When shit hits the fan (and it will), you'll wonder whether your REST code or your model code is wrong. Keeping them apart lets you test each one separately.\n",
        "\n",
        "The `get_model()` function ensures that we have a single instance of our Model (Singleton). We'll use it in our API handler."
      ]
    },
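    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The pattern behind `get_model()` is just a module-level instance. A stand-in sketch (with a dummy class instead of the real `Model`, whose construction is expensive):\n",
        "\n",
        "```python\n",
        "class Model:\n",
        "    # Stand-in for the real classifier; imagine an expensive __init__\n",
        "    # that loads BERT weights from disk\n",
        "    def __init__(self):\n",
        "        self.loaded = True\n",
        "\n",
        "\n",
        "# Created once, when the module is first imported\n",
        "model = Model()\n",
        "\n",
        "\n",
        "def get_model():\n",
        "    # Every caller shares the same instance - no repeated loading\n",
        "    return model\n",
        "```\n"
      ]
    },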
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sIWsyweFRfqM",
        "colab_type": "text"
      },
      "source": [
        "## Putting everything together\n",
        "\n",
        "Our request handler needs access to the model to return a prediction. We'll use the [Dependency Injection framework](https://fastapi.tiangolo.com/tutorial/dependencies/) provided by FastAPI to inject our model. Here's the new `predict` function:\n",
        "\n",
        "```python\n",
        "@app.post(\"/predict\", response_model=SentimentResponse)\n",
        "def predict(request: SentimentRequest, model: Model = Depends(get_model)):\n",
        "    sentiment, confidence, probabilities = model.predict(request.text)\n",
        "    return SentimentResponse(\n",
        "        sentiment=sentiment, confidence=confidence, probabilities=probabilities\n",
        "    )\n",
        "```\n",
        "\n",
        "The model gets injected by `Depends`, via our singleton accessor `get_model`. Looking at this handler, you can really appreciate the power of the abstraction!\n",
        "\n",
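        "A nice side effect of `Depends` is testability: you can override the model dependency with a fake, so the API layer can be tested without loading BERT at all. A sketch, assuming you test with FastAPI's `TestClient` (note that `FakeModel` is a made-up stand-in, not part of the project):\n",
        "\n",
        "```python\n",
        "from typing import Dict\n",
        "\n",
        "from fastapi import Depends, FastAPI\n",
        "from fastapi.testclient import TestClient\n",
        "from pydantic import BaseModel\n",
        "\n",
        "app = FastAPI()\n",
        "\n",
        "\n",
        "class SentimentRequest(BaseModel):\n",
        "    text: str\n",
        "\n",
        "\n",
        "class SentimentResponse(BaseModel):\n",
        "    probabilities: Dict[str, float]\n",
        "    sentiment: str\n",
        "    confidence: float\n",
        "\n",
        "\n",
        "def get_model():\n",
        "    raise NotImplementedError  # the real loader lives in model.py\n",
        "\n",
        "\n",
        "@app.post(\"/predict\", response_model=SentimentResponse)\n",
        "def predict(request: SentimentRequest, model=Depends(get_model)):\n",
        "    sentiment, confidence, probabilities = model.predict(request.text)\n",
        "    return SentimentResponse(\n",
        "        sentiment=sentiment, confidence=confidence, probabilities=probabilities\n",
        "    )\n",
        "\n",
        "\n",
        "class FakeModel:\n",
        "    # Stand-in that skips BERT entirely\n",
        "    def predict(self, text):\n",
        "        return \"neutral\", 1.0, {\"negative\": 0.0, \"neutral\": 1.0, \"positive\": 0.0}\n",
        "\n",
        "\n",
        "# Swap the real dependency for the fake - FastAPI looks up overrides here\n",
        "app.dependency_overrides[get_model] = lambda: FakeModel()\n",
        "\n",
        "client = TestClient(app)\n",
        "response = client.post(\"/predict\", json={\"text\": \"anything\"})\n",
        "```\n",
        "\n",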
        "But does it work?"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZQXW21Z6GV8Y",
        "colab_type": "text"
      },
      "source": [
        "## Testing the API\n",
        "\n",
        "Let's fire up the server:\n",
        "\n",
        "```bash\n",
        "uvicorn sentiment_analyzer.api:app\n",
        "```\n",
        "\n",
        "This should take a couple of seconds to load everything and start the HTTP server. Then, send a test request using [HTTPie](https://httpie.io/):\n",
        "\n",
        "```bash\n",
        "http POST http://localhost:8000/predict text=\"This app is a total waste of time!\"\n",
        "```\n",
        "\n",
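        "If you don't have [HTTPie](https://httpie.io/) installed, plain `curl` works too:\n",
        "\n",
        "```bash\n",
        "curl -s -X POST http://localhost:8000/predict \\\n",
        "  -H \"Content-Type: application/json\" \\\n",
        "  -d '{\"text\": \"This app is a total waste of time!\"}'\n",
        "```\n",
        "\n",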
        "Here's the response:\n",
        "\n",
        "```json\n",
        "{\n",
        "    \"confidence\": 0.999885082244873,\n",
        "    \"probabilities\": {\n",
        "        \"negative\": 0.999885082244873,\n",
        "        \"neutral\": 8.876612992025912e-05,\n",
        "        \"positive\": 2.614063305372838e-05\n",
        "    },\n",
        "    \"sentiment\": \"negative\"\n",
        "}\n",
        "```\n",
        "\n",
        "Let's try with a positive one:\n",
        "\n",
        "```bash\n",
        "http POST http://localhost:8000/predict text=\"OMG. I love how easy it is to stick to my schedule. Would recommend to everyone!\"\n",
        "```\n",
        "\n",
        "```json\n",
        "{\n",
        "    \"confidence\": 0.999932050704956,\n",
        "    \"probabilities\": {\n",
        "        \"negative\": 1.834999602579046e-05,\n",
        "        \"neutral\": 4.956663542543538e-05,\n",
        "        \"positive\": 0.999932050704956\n",
        "    },\n",
        "    \"sentiment\": \"positive\"\n",
        "}\n",
        "```\n",
        "\n",
        "Both results are on point. Feel free to try it out with some real reviews from the Play Store."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WL-iZLLBinoc",
        "colab_type": "text"
      },
      "source": [
        "## Summary\n",
        "\n",
        "You should now be the proud owner of a (kind of) ready-to-deploy Sentiment Analysis REST API built on BERT. Of course, it's still missing a lot to be production-ready - logging, monitoring, alerting, containerization, and much more. But hey, you did good!\n",
        "\n",
        "- [Read the tutorial](https://www.curiousily.com/posts/deploy-bert-for-sentiment-analysis-as-rest-api-using-pytorch-transformers-by-hugging-face-and-fastapi/)\n",
        "- [Run the notebook in your browser (Google Colab)](https://colab.research.google.com/drive/154jf65arX4cHGaGXl2_kJ1DT8FmF4Lhf)\n",
        "- [Project on GitHub](https://github.com/curiousily/Deploy-BERT-for-Sentiment-Analysis-with-FastAPI)\n",
        "- [`Getting Things Done with Pytorch` on GitHub](https://github.com/curiousily/Getting-Things-Done-with-Pytorch)\n",
        "\n",
        "You learned how to:\n",
        "\n",
        "- Initialize a project using Pipenv\n",
        "- Create a project skeleton\n",
        "- Add the pre-trained model and create an interface to abstract the inference logic\n",
        "- Update the request handler function to return predictions using the model\n",
        "- Start the server and send a test request\n",
        "\n",
        "Go on then, deploy and make your users happy!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OIbhtk11xamy",
        "colab_type": "text"
      },
      "source": [
        "## References\n",
        "\n",
        "- [FastAPI Homepage](https://fastapi.tiangolo.com/)\n",
        "- [fastAPI ML quickstart](https://github.com/cosmic-cortex/fastAPI-ML-quickstart)"
      ]
    }
  ]
}