{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "Movie Reviews with bert-for-tf2.ipynb",
      "version": "0.3.2",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true,
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/kpe/bert-for-tf2/blob/master/examples/movie_reviews_with_bert_for_tf2_on_gpu.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xiYrZKaHwV81",
        "colab_type": "text"
      },
      "source": [
        "This is a modification of https://github.com/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb using the TensorFlow 2.0 Keras implementation of BERT from [kpe/bert-for-tf2](https://github.com/kpe/bert-for-tf2) with the original [google-research/bert](https://github.com/google-research/bert) weights.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "j0a4mTk9o1Qg",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Copyright 2019 Google Inc.\n",
        "\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "\n",
        "#     http://www.apache.org/licenses/LICENSE-2.0\n",
        "\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dCpvgG0vwXAZ",
        "colab_type": "text"
      },
      "source": [
        "# Predicting Movie Review Sentiment with [kpe/bert-for-tf2](https://github.com/kpe/bert-for-tf2)\n",
        "\n",
        "First install some prerequisites:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qFI2_B8ffipb",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!pip install tqdm  >> /dev/null"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hsZvic2YxnTz",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import os\n",
        "import math\n",
        "import datetime\n",
        "\n",
        "from tqdm import tqdm\n",
        "\n",
        "import pandas as pd\n",
        "import numpy as np\n",
        "\n",
        "import tensorflow as tf\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Evlk1N78HIXM",
        "colab_type": "code",
        "outputId": "9ada0ad2-3297-414d-b7bb-748063302382",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "tf.__version__"
      ],
      "execution_count": 0,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'1.14.0'"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 4
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TAdrQqEccIva",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "if tf.__version__.startswith(\"1.\"):\n",
        "  tf.enable_eager_execution()\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cp5wfXDx5SPH",
        "colab_type": "text"
      },
      "source": [
        "In addition to the standard libraries we imported above, we'll need to install the [bert-for-tf2](https://github.com/kpe/bert-for-tf2) Python package and do the imports required for loading the pre-trained weights and tokenizing the input text."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jviywGyWyKsA",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!pip install bert-for-tf2 >> /dev/null"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZtI7cKWDbUVc",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import bert\n",
        "from bert import BertModelLayer\n",
        "from bert.loader import StockBertConfig, map_stock_config_to_params, load_stock_weights\n",
        "from bert.tokenization.bert_tokenization import FullTokenizer"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pmFYvkylMwXn",
        "colab_type": "text"
      },
      "source": [
        "# Data"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MC_w8SRqN0fr",
        "colab_type": "text"
      },
      "source": [
        "First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from [this TensorFlow tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fom_ff20gyy6",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from tensorflow import keras\n",
        "import os\n",
        "import re\n",
        "\n",
        "# Load all files from a directory in a DataFrame.\n",
        "def load_directory_data(directory):\n",
        "  data = {}\n",
        "  data[\"sentence\"] = []\n",
        "  data[\"sentiment\"] = []\n",
        "  for file_path in tqdm(os.listdir(directory), desc=os.path.basename(directory)):\n",
        "    with tf.io.gfile.GFile(os.path.join(directory, file_path), \"r\") as f:\n",
        "      data[\"sentence\"].append(f.read())\n",
        "      data[\"sentiment\"].append(re.match(r\"\\d+_(\\d+)\\.txt\", file_path).group(1))\n",
        "  return pd.DataFrame.from_dict(data)\n",
        "\n",
        "# Merge positive and negative examples, add a polarity column and shuffle.\n",
        "def load_dataset(directory):\n",
        "  pos_df = load_directory_data(os.path.join(directory, \"pos\"))\n",
        "  neg_df = load_directory_data(os.path.join(directory, \"neg\"))\n",
        "  pos_df[\"polarity\"] = 1\n",
        "  neg_df[\"polarity\"] = 0\n",
        "  return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)\n",
        "\n",
        "# Download and process the dataset files.\n",
        "def download_and_load_datasets(force_download=False):\n",
        "  dataset = tf.keras.utils.get_file(\n",
        "      fname=\"aclImdb.tar.gz\", \n",
        "      origin=\"http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\", \n",
        "      extract=True)\n",
        "  \n",
        "  train_df = load_dataset(os.path.join(os.path.dirname(dataset), \n",
        "                                       \"aclImdb\", \"train\"))\n",
        "  test_df = load_dataset(os.path.join(os.path.dirname(dataset), \n",
        "                                      \"aclImdb\", \"test\"))\n",
        "  \n",
        "  return train_df, test_df\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CaE2G_2DdzVg",
        "colab_type": "text"
      },
      "source": [
        "Let's use the `MovieReviewData` class below to prepare/encode\n",
        "the data for feeding into our BERT model, by:\n",
        "  - tokenizing the text\n",
        "  - trimming or padding it to `max_seq_len` tokens\n",
        "  - appending the special tokens `[CLS]` and `[SEP]`\n",
        "  - converting the string tokens to numerical IDs using the original model's token encoding from `vocab.txt`"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2abfwdn-g135",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "\n",
        "import bert\n",
        "from bert import BertModelLayer\n",
        "from bert.loader import StockBertConfig, map_stock_config_to_params, load_stock_weights\n",
        "from bert.tokenization.bert_tokenization import FullTokenizer\n",
        "\n",
        "\n",
        "class MovieReviewData:\n",
        "    DATA_COLUMN = \"sentence\"\n",
        "    LABEL_COLUMN = \"polarity\"\n",
        "\n",
        "    def __init__(self, tokenizer: FullTokenizer, sample_size=None, max_seq_len=1024):\n",
        "        self.tokenizer = tokenizer\n",
        "        self.sample_size = sample_size\n",
        "        self.max_seq_len = 0\n",
        "        train, test = download_and_load_datasets()\n",
        "        \n",
        "        train, test = map(lambda df: df.reindex(df[MovieReviewData.DATA_COLUMN].str.len().sort_values().index), \n",
        "                          [train, test])\n",
        "                \n",
        "        if sample_size is not None:\n",
        "            assert sample_size % 128 == 0\n",
        "            train, test = train.head(sample_size), test.head(sample_size)\n",
        "            # train, test = map(lambda df: df.sample(sample_size), [train, test])\n",
        "        \n",
        "        ((self.train_x, self.train_y),\n",
        "         (self.test_x, self.test_y)) = map(self._prepare, [train, test])\n",
        "\n",
        "        print(\"max seq_len\", self.max_seq_len)\n",
        "        self.max_seq_len = min(self.max_seq_len, max_seq_len)\n",
        "        ((self.train_x, self.train_x_token_types),\n",
        "         (self.test_x, self.test_x_token_types)) = map(self._pad, \n",
        "                                                       [self.train_x, self.test_x])\n",
        "\n",
        "    def _prepare(self, df):\n",
        "        x, y = [], []\n",
        "        with tqdm(total=df.shape[0], unit_scale=True) as pbar:\n",
        "            for ndx, row in df.iterrows():\n",
        "                text, label = row[MovieReviewData.DATA_COLUMN], row[MovieReviewData.LABEL_COLUMN]\n",
        "                tokens = self.tokenizer.tokenize(text)\n",
        "                tokens = [\"[CLS]\"] + tokens + [\"[SEP]\"]\n",
        "                token_ids = self.tokenizer.convert_tokens_to_ids(tokens)\n",
        "                self.max_seq_len = max(self.max_seq_len, len(token_ids))\n",
        "                x.append(token_ids)\n",
        "                y.append(int(label))\n",
        "                pbar.update()\n",
        "        return np.array(x), np.array(y)\n",
        "\n",
        "    def _pad(self, ids):\n",
        "        x, t = [], []\n",
        "        token_type_ids = [0] * self.max_seq_len\n",
        "        for input_ids in ids:\n",
        "            if len(input_ids) > self.max_seq_len:\n",
        "                input_ids = input_ids[:self.max_seq_len - 1] + input_ids[-1:]  # truncate, but keep the trailing [SEP]\n",
        "            input_ids = input_ids + [0] * (self.max_seq_len - len(input_ids))\n",
        "            x.append(np.array(input_ids))\n",
        "            t.append(token_type_ids)\n",
        "        return np.array(x), np.array(t)\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SGL0mEoNFGlP",
        "colab_type": "text"
      },
      "source": [
        "## A tweak\n",
        "\n",
        "Because of a `tf.train.load_checkpoint` limitation requiring list permissions on the Google Storage bucket, we need to copy the pre-trained BERT weights locally."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lw_F488eixTV",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "bert_ckpt_dir=\"gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12/\"\n",
        "bert_ckpt_file = bert_ckpt_dir + \"bert_model.ckpt\"\n",
        "bert_config_file = bert_ckpt_dir + \"bert_config.json\""
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dGFfkWO07cWG",
        "colab_type": "code",
        "outputId": "773f74c0-0f33-4626-929f-cf35c0996353",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 566
        }
      },
      "source": [
        "%%time\n",
        "\n",
        "bert_model_dir=\"2018_10_18\"\n",
        "bert_model_name=\"uncased_L-12_H-768_A-12\"\n",
        "\n",
        "!mkdir -p .model .model/$bert_model_name\n",
        "\n",
        "for fname in [\"bert_config.json\", \"vocab.txt\", \"bert_model.ckpt.meta\", \"bert_model.ckpt.index\", \"bert_model.ckpt.data-00000-of-00001\"]:\n",
        "  cmd = f\"gsutil cp gs://bert_models/{bert_model_dir}/{bert_model_name}/{fname} .model/{bert_model_name}\"\n",
        "  !$cmd\n",
        "\n",
        "!ls -la .model .model/$bert_model_name"
      ],
      "execution_count": 0,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Copying gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12/bert_config.json...\n",
            "/ [0 files][    0.0 B/  313.0 B]                                                \r/ [1 files][  313.0 B/  313.0 B]                                                \r\n",
            "Operation completed over 1 objects/313.0 B.                                      \n",
            "Copying gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12/vocab.txt...\n",
            "/ [1 files][226.1 KiB/226.1 KiB]                                                \n",
            "Operation completed over 1 objects/226.1 KiB.                                    \n",
            "Copying gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12/bert_model.ckpt.meta...\n",
            "/ [1 files][883.1 KiB/883.1 KiB]                                                \n",
            "Operation completed over 1 objects/883.1 KiB.                                    \n",
            "Copying gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12/bert_model.ckpt.index...\n",
            "/ [1 files][  8.3 KiB/  8.3 KiB]                                                \n",
            "Operation completed over 1 objects/8.3 KiB.                                      \n",
            "Copying gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12/bert_model.ckpt.data-00000-of-00001...\n",
            "| [1 files][420.0 MiB/420.0 MiB]                                                \n",
            "Operation completed over 1 objects/420.0 MiB.                                    \n",
            ".model:\n",
            "total 16\n",
            "drwxr-xr-x 3 root root 4096 Jul 23 08:12 .\n",
            "drwxr-xr-x 1 root root 4096 Jul 23 08:42 ..\n",
            "drwxr-xr-x 2 root root 4096 Jul 23 08:46 uncased_L-12_H-768_A-12\n",
            "\n",
            ".model/uncased_L-12_H-768_A-12:\n",
            "total 431244\n",
            "drwxr-xr-x 2 root root      4096 Jul 23 08:46 .\n",
            "drwxr-xr-x 3 root root      4096 Jul 23 08:12 ..\n",
            "-rw-r--r-- 1 root root       313 Jul 23 08:45 bert_config.json\n",
            "-rw-r--r-- 1 root root 440425712 Jul 23 08:46 bert_model.ckpt.data-00000-of-00001\n",
            "-rw-r--r-- 1 root root      8528 Jul 23 08:45 bert_model.ckpt.index\n",
            "-rw-r--r-- 1 root root    904243 Jul 23 08:45 bert_model.ckpt.meta\n",
            "-rw-r--r-- 1 root root    231508 Jul 23 08:45 vocab.txt\n",
            "CPU times: user 183 ms, sys: 90.6 ms, total: 274 ms\n",
            "Wall time: 17.7 s\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "049feT8dFprc",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "bert_ckpt_dir    = os.path.join(\".model/\",bert_model_name)\n",
        "bert_ckpt_file   = os.path.join(bert_ckpt_dir, \"bert_model.ckpt\")\n",
        "bert_config_file = os.path.join(bert_ckpt_dir, \"bert_config.json\")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "G4xPTleh2X2b",
        "colab_type": "text"
      },
      "source": [
        "# Preparing the Data\n",
        "\n",
        "Now let's fetch and prepare the data by taking the first `max_seq_len` tokens after tokenizing with the BERT tokenizer, and use `sample_size` examples for both training and testing."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XA8WHJgzhIZf",
        "colab_type": "text"
      },
      "source": [
        "To keep training fast, we'll take a sample of about 2500 train and test examples each, and use only the first 128 tokens (a transformer's memory and computation requirements scale quadratically with the sequence length - so on a TPU you might use `max_seq_len=512`, but on a GPU this would be too slow, and you would have to use a very small `batch_size` to fit the model into GPU memory)."
      ]
    },
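    {
      "cell_type": "markdown",
      "metadata": {
        "id": "attn-mem-note",
        "colab_type": "text"
      },
      "source": [
        "As a rough illustration of that quadratic scaling (a back-of-the-envelope sketch, not part of the original tutorial): the self-attention score matrices alone take `batch * heads * seq_len^2` floats per layer, so quadrupling the sequence length multiplies that term by 16."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "attn-mem-sketch",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Estimate the memory taken by the attention-score matrices of one layer.\n",
        "# BERT-base has 12 heads; batch_size=48 matches the training cell below.\n",
        "def attention_scores_mib(batch_size, num_heads, seq_len, bytes_per_float=4):\n",
        "    return batch_size * num_heads * seq_len ** 2 * bytes_per_float / 2 ** 20\n",
        "\n",
        "for seq_len in [128, 512]:\n",
        "    # 128 tokens -> 36 MiB per layer, 512 tokens -> 576 MiB per layer\n",
        "    print(f\"seq_len={seq_len}: {attention_scores_mib(48, 12, seq_len):.0f} MiB per layer\")"
      ],
      "execution_count": 0,
      "outputs": []
    },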
    {
      "cell_type": "code",
      "metadata": {
        "id": "kF_3KhGQ0GTc",
        "colab_type": "code",
        "outputId": "fd993d23-61ce-4aae-c992-5b5591123198",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 171
        }
      },
      "source": [
        "%%time\n",
        "\n",
        "tokenizer = FullTokenizer(vocab_file=os.path.join(bert_ckpt_dir, \"vocab.txt\"))\n",
        "data = MovieReviewData(tokenizer, \n",
        "                       sample_size=10*128*2,#5000, \n",
        "                       max_seq_len=128)"
      ],
      "execution_count": 0,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "pos: 100%|██████████| 12500/12500 [00:01<00:00, 7925.65it/s]\n",
            "neg: 100%|██████████| 12500/12500 [00:01<00:00, 7805.72it/s]\n",
            "pos: 100%|██████████| 12500/12500 [00:01<00:00, 7829.00it/s]\n",
            "neg: 100%|██████████| 12500/12500 [00:01<00:00, 8075.70it/s]\n",
            "100%|██████████| 2.56k/2.56k [00:03<00:00, 666it/s]\n",
            "100%|██████████| 2.56k/2.56k [00:03<00:00, 707it/s]\n"
          ],
          "name": "stderr"
        },
        {
          "output_type": "stream",
          "text": [
            "max seq_len 178\n",
            "CPU times: user 29.3 s, sys: 12.2 s, total: 41.5 s\n",
            "Wall time: 41.9 s\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "prRQM8pDi8xI",
        "colab_type": "code",
        "outputId": "b98433d4-c7e6-4bb5-af54-941f52e0b8e7",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 103
        }
      },
      "source": [
        "print(\"            train_x\", data.train_x.shape)\n",
        "print(\"train_x_token_types\", data.train_x_token_types.shape)\n",
        "print(\"            train_y\", data.train_y.shape)\n",
        "\n",
        "print(\"             test_x\", data.test_x.shape)\n",
        "\n",
        "print(\"        max_seq_len\", data.max_seq_len)"
      ],
      "execution_count": 0,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "            train_x (2560, 128)\n",
            "train_x_token_types (2560, 128)\n",
            "            train_y (2560,)\n",
            "             test_x (2560, 128)\n",
            "        max_seq_len 128\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sfRnHSz3iSXz",
        "colab_type": "text"
      },
      "source": [
        "## Adapter BERT\n",
        "\n",
        "If we decide to use [adapter-BERT](https://arxiv.org/abs/1902.00751) we need some helpers for freezing the original BERT layers."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "IuMOGwFui4it",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "\n",
        "def flatten_layers(root_layer):\n",
        "    if isinstance(root_layer, keras.layers.Layer):\n",
        "        yield root_layer\n",
        "    for layer in root_layer._layers:\n",
        "        for sub_layer in flatten_layers(layer):\n",
        "            yield sub_layer\n",
        "\n",
        "\n",
        "def freeze_bert_layers(l_bert):\n",
        "    \"\"\"\n",
        "    Freezes all but LayerNorm and adapter layers - see arXiv:1902.00751.\n",
        "    \"\"\"\n",
        "    for layer in flatten_layers(l_bert):\n",
        "        if layer.name in [\"LayerNorm\", \"adapter-down\", \"adapter-up\"]:\n",
        "            layer.trainable = True\n",
        "        elif len(layer._layers) == 0:\n",
        "            layer.trainable = False\n",
        "    l_bert.embeddings_layer.trainable = False\n",
        "\n",
        "\n",
        "def create_learning_rate_scheduler(max_learn_rate=5e-5,\n",
        "                                   end_learn_rate=1e-7,\n",
        "                                   warmup_epoch_count=10,\n",
        "                                   total_epoch_count=90):\n",
        "\n",
        "    def lr_scheduler(epoch):\n",
        "        if epoch < warmup_epoch_count:\n",
        "            res = (max_learn_rate/warmup_epoch_count) * (epoch + 1)\n",
        "        else:\n",
        "            res = max_learn_rate*math.exp(math.log(end_learn_rate/max_learn_rate)*(epoch-warmup_epoch_count+1)/(total_epoch_count-warmup_epoch_count+1))\n",
        "        return float(res)\n",
        "    learning_rate_scheduler = tf.keras.callbacks.LearningRateScheduler(lr_scheduler, verbose=1)\n",
        "\n",
        "    return learning_rate_scheduler\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ccp5trMwRtmr",
        "colab_type": "text"
      },
      "source": [
        "# Creating a model\n",
        "\n",
        "Now let's create a classification model using [adapter-BERT](https://arxiv.org/abs/1902.00751), which is a clever way of reducing the trainable parameter count by freezing the original BERT weights and adapting them with two FFN bottlenecks (i.e. `adapter_size` below) in every BERT layer.\n",
        "\n",
        "**N.B.** The commented-out code below shows how to feed a `token_type_ids`/`segment_ids` sequence (which is not needed in our case)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6o2a5ZIvRcJq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def create_model(max_seq_len, adapter_size=64):\n",
        "  \"\"\"Creates a classification model.\"\"\"\n",
        "\n",
        "  #adapter_size = 64  # see - arXiv:1902.00751\n",
        "\n",
        "  # create the bert layer\n",
        "  with tf.io.gfile.GFile(bert_config_file, \"r\") as reader:\n",
        "      bc = StockBertConfig.from_json_string(reader.read())\n",
        "      bert_params = map_stock_config_to_params(bc)\n",
        "      bert_params.adapter_size = adapter_size\n",
        "      bert = BertModelLayer.from_params(bert_params, name=\"bert\")\n",
        "        \n",
        "  input_ids      = keras.layers.Input(shape=(max_seq_len,), dtype='int32', name=\"input_ids\")\n",
        "  # token_type_ids = keras.layers.Input(shape=(max_seq_len,), dtype='int32', name=\"token_type_ids\")\n",
        "  # output         = bert([input_ids, token_type_ids])\n",
        "  output         = bert(input_ids)\n",
        "\n",
        "  print(\"bert shape\", output.shape)\n",
        "  cls_out = keras.layers.Lambda(lambda seq: seq[:, 0, :])(output)\n",
        "  cls_out = keras.layers.Dropout(0.5)(cls_out)\n",
        "  logits = keras.layers.Dense(units=768, activation=\"tanh\")(cls_out)\n",
        "  logits = keras.layers.Dropout(0.5)(logits)\n",
        "  logits = keras.layers.Dense(units=2, activation=\"softmax\")(logits)\n",
        "\n",
        "  # model = keras.Model(inputs=[input_ids, token_type_ids], outputs=logits)\n",
        "  # model.build(input_shape=[(None, max_seq_len), (None, max_seq_len)])\n",
        "  model = keras.Model(inputs=input_ids, outputs=logits)\n",
        "  model.build(input_shape=(None, max_seq_len))\n",
        "\n",
        "  # load the pre-trained model weights\n",
        "  load_stock_weights(bert, bert_ckpt_file)\n",
        "\n",
        "  # freeze weights if adapter-BERT is used\n",
        "  if adapter_size is not None:\n",
        "      freeze_bert_layers(bert)\n",
        "\n",
        "  model.compile(optimizer=keras.optimizers.Adam(),\n",
        "                # the final Dense layer already applies softmax, so from_logits must be False\n",
        "                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False),\n",
        "                metrics=[keras.metrics.SparseCategoricalAccuracy(name=\"acc\")])\n",
        "\n",
        "  model.summary()\n",
        "        \n",
        "  return model\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bZnmtDc7HlEm",
        "colab_type": "code",
        "outputId": "fcd96c78-792c-4032-d188-73a9c21ec304",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 517
        }
      },
      "source": [
        "adapter_size = None # use None to fine-tune all of BERT\n",
        "model = create_model(data.max_seq_len, adapter_size=adapter_size)"
      ],
      "execution_count": 0,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "WARNING: Logging before flag parsing goes to stderr.\n",
            "W0723 08:46:53.670279 140084973148032 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/bert/loader.py:113: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
            "Instructions for updating:\n",
            "Use standard file APIs to check for files with this prefix.\n"
          ],
          "name": "stderr"
        },
        {
          "output_type": "stream",
          "text": [
            "bert shape (?, 128, 768)\n",
            "Done loading 196 BERT weights from: .model/uncased_L-12_H-768_A-12/bert_model.ckpt into <bert.model.BertModelLayer object at 0x7f67c76a2f28> (prefix:bert)\n",
            "Model: \"model\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_ids (InputLayer)       [(None, 128)]             0         \n",
            "_________________________________________________________________\n",
            "bert (BertModelLayer)        (None, 128, 768)          108890112 \n",
            "_________________________________________________________________\n",
            "lambda (Lambda)              (None, 768)               0         \n",
            "_________________________________________________________________\n",
            "dropout (Dropout)            (None, 768)               0         \n",
            "_________________________________________________________________\n",
            "dense (Dense)                (None, 768)               590592    \n",
            "_________________________________________________________________\n",
            "dropout_1 (Dropout)          (None, 768)               0         \n",
            "_________________________________________________________________\n",
            "dense_1 (Dense)              (None, 2)                 1538      \n",
            "=================================================================\n",
            "Total params: 109,482,242\n",
            "Trainable params: 109,482,242\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZuLOkwonF-9S",
        "colab_type": "code",
        "outputId": "ce1451d4-310c-41f1-ddd3-a2e0841d8528",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        }
      },
      "source": [
        "%%time\n",
        "\n",
        "log_dir = \".log/movie_reviews/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
        "tensorboard_callback = keras.callbacks.TensorBoard(log_dir=log_dir)\n",
        "\n",
        "total_epoch_count = 50\n",
        "# model.fit(x=(data.train_x, data.train_x_token_types), y=data.train_y,\n",
        "model.fit(x=data.train_x, y=data.train_y,\n",
        "          validation_split=0.1,\n",
        "          batch_size=48,\n",
        "          shuffle=True,\n",
        "          epochs=total_epoch_count,\n",
        "          callbacks=[create_learning_rate_scheduler(max_learn_rate=1e-5,\n",
        "                                                    end_learn_rate=1e-7,\n",
        "                                                    warmup_epoch_count=20,\n",
        "                                                    total_epoch_count=total_epoch_count),\n",
        "                     keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True),\n",
        "                     tensorboard_callback])\n",
        "\n",
        "model.save_weights('./movie_reviews.h5', overwrite=True)"
      ],
      "execution_count": 18,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Train on 2304 samples, validate on 256 samples\n"
          ],
          "name": "stdout"
        },
        {
          "output_type": "stream",
          "text": [
            "W0723 08:46:55.925203 140084973148032 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:1205: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
            "Instructions for updating:\n",
            "Use tf.where in 2.0, which has the same broadcast rule as np.where\n"
          ],
          "name": "stderr"
        },
        {
          "output_type": "stream",
          "text": [
            "\n",
            "Epoch 00001: LearningRateScheduler reducing learning rate to 1.5e-06.\n",
            "Epoch 1/50\n",
            "2304/2304 [==============================] - 95s 41ms/sample - loss: 0.7083 - acc: 0.5391 - val_loss: 0.6872 - val_acc: 0.5508\n",
            "\n",
            "Epoch 00002: LearningRateScheduler reducing learning rate to 3e-06.\n",
            "Epoch 2/50\n",
            "2304/2304 [==============================] - 79s 34ms/sample - loss: 0.6695 - acc: 0.5898 - val_loss: 0.6220 - val_acc: 0.6680\n",
            "\n",
            "Epoch 00003: LearningRateScheduler reducing learning rate to 4.5e-06.\n",
            "Epoch 3/50\n",
            "2304/2304 [==============================] - 79s 34ms/sample - loss: 0.5545 - acc: 0.7543 - val_loss: 0.4350 - val_acc: 0.8867\n",
            "\n",
            "Epoch 00004: LearningRateScheduler reducing learning rate to 6e-06.\n",
            "Epoch 4/50\n",
            "2304/2304 [==============================] - 79s 34ms/sample - loss: 0.4176 - acc: 0.8958 - val_loss: 0.4163 - val_acc: 0.8945\n",
            "\n",
            "Epoch 00005: LearningRateScheduler reducing learning rate to 7.5e-06.\n",
            "Epoch 5/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3840 - acc: 0.9306 - val_loss: 0.4241 - val_acc: 0.8828\n",
            "\n",
            "Epoch 00006: LearningRateScheduler reducing learning rate to 9e-06.\n",
            "Epoch 6/50\n",
            "2304/2304 [==============================] - 79s 34ms/sample - loss: 0.3714 - acc: 0.9431 - val_loss: 0.4163 - val_acc: 0.8945\n",
            "\n",
            "Epoch 00007: LearningRateScheduler reducing learning rate to 1.0500000000000001e-05.\n",
            "Epoch 7/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3602 - acc: 0.9510 - val_loss: 0.4356 - val_acc: 0.8750\n",
            "\n",
            "Epoch 00008: LearningRateScheduler reducing learning rate to 1.2e-05.\n",
            "Epoch 8/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3511 - acc: 0.9605 - val_loss: 0.4409 - val_acc: 0.8633\n",
            "\n",
            "Epoch 00009: LearningRateScheduler reducing learning rate to 1.35e-05.\n",
            "Epoch 9/50\n",
            "2304/2304 [==============================] - 79s 34ms/sample - loss: 0.3480 - acc: 0.9648 - val_loss: 0.3955 - val_acc: 0.9219\n",
            "\n",
            "Epoch 00010: LearningRateScheduler reducing learning rate to 1.5e-05.\n",
            "Epoch 10/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3370 - acc: 0.9774 - val_loss: 0.4286 - val_acc: 0.8750\n",
            "\n",
            "Epoch 00011: LearningRateScheduler reducing learning rate to 1.65e-05.\n",
            "Epoch 11/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3366 - acc: 0.9770 - val_loss: 0.4343 - val_acc: 0.8789\n",
            "\n",
            "Epoch 00012: LearningRateScheduler reducing learning rate to 1.8e-05.\n",
            "Epoch 12/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3435 - acc: 0.9696 - val_loss: 0.4332 - val_acc: 0.8750\n",
            "\n",
            "Epoch 00013: LearningRateScheduler reducing learning rate to 1.95e-05.\n",
            "Epoch 13/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3387 - acc: 0.9744 - val_loss: 0.4304 - val_acc: 0.8789\n",
            "\n",
            "Epoch 00014: LearningRateScheduler reducing learning rate to 2.1000000000000002e-05.\n",
            "Epoch 14/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3367 - acc: 0.9761 - val_loss: 0.4556 - val_acc: 0.8516\n",
            "\n",
            "Epoch 00015: LearningRateScheduler reducing learning rate to 2.25e-05.\n",
            "Epoch 15/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3368 - acc: 0.9766 - val_loss: 0.4319 - val_acc: 0.8750\n",
            "\n",
            "Epoch 00016: LearningRateScheduler reducing learning rate to 2.4e-05.\n",
            "Epoch 16/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3326 - acc: 0.9809 - val_loss: 0.4286 - val_acc: 0.8828\n",
            "\n",
            "Epoch 00017: LearningRateScheduler reducing learning rate to 2.55e-05.\n",
            "Epoch 17/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3323 - acc: 0.9813 - val_loss: 0.4170 - val_acc: 0.8945\n",
            "\n",
            "Epoch 00018: LearningRateScheduler reducing learning rate to 2.7e-05.\n",
            "Epoch 18/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3384 - acc: 0.9748 - val_loss: 0.4272 - val_acc: 0.8789\n",
            "\n",
            "Epoch 00019: LearningRateScheduler reducing learning rate to 2.85e-05.\n",
            "Epoch 19/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3328 - acc: 0.9800 - val_loss: 0.4257 - val_acc: 0.8867\n",
            "\n",
            "Epoch 00020: LearningRateScheduler reducing learning rate to 3e-05.\n",
            "Epoch 20/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3297 - acc: 0.9839 - val_loss: 0.4248 - val_acc: 0.8867\n",
            "\n",
            "Epoch 00021: LearningRateScheduler reducing learning rate to 2.495824924488725e-05.\n",
            "Epoch 21/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3289 - acc: 0.9848 - val_loss: 0.4292 - val_acc: 0.8867\n",
            "\n",
            "Epoch 00022: LearningRateScheduler reducing learning rate to 2.076380684566383e-05.\n",
            "Epoch 22/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3299 - acc: 0.9835 - val_loss: 0.4250 - val_acc: 0.8867\n",
            "\n",
            "Epoch 00023: LearningRateScheduler reducing learning rate to 1.7274275550892468e-05.\n",
            "Epoch 23/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3307 - acc: 0.9818 - val_loss: 0.4201 - val_acc: 0.8906\n",
            "\n",
            "Epoch 00024: LearningRateScheduler reducing learning rate to 1.437118915746787e-05.\n",
            "Epoch 24/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3285 - acc: 0.9844 - val_loss: 0.4257 - val_acc: 0.8867\n",
            "\n",
            "Epoch 00025: LearningRateScheduler reducing learning rate to 1.1955990697916808e-05.\n",
            "Epoch 25/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3264 - acc: 0.9865 - val_loss: 0.4334 - val_acc: 0.8750\n",
            "\n",
            "Epoch 00026: LearningRateScheduler reducing learning rate to 9.946686526938708e-06.\n",
            "Epoch 26/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3278 - acc: 0.9857 - val_loss: 0.4266 - val_acc: 0.8867\n",
            "\n",
            "Epoch 00027: LearningRateScheduler reducing learning rate to 8.275062716669937e-06.\n",
            "Epoch 27/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3295 - acc: 0.9835 - val_loss: 0.4161 - val_acc: 0.8945\n",
            "\n",
            "Epoch 00028: LearningRateScheduler reducing learning rate to 6.884369259990735e-06.\n",
            "Epoch 28/50\n",
            "2304/2304 [==============================] - 78s 34ms/sample - loss: 0.3259 - acc: 0.9874 - val_loss: 0.4084 - val_acc: 0.9062\n",
            "\n",
            "Epoch 00029: LearningRateScheduler reducing learning rate to 5.727393462822959e-06.\n",
            "Epoch 29/50\n",
            "2304/2304 [==============================] - 79s 34ms/sample - loss: 0.3255 - acc: 0.9878 - val_loss: 0.4287 - val_acc: 0.8828\n",
            "CPU times: user 22min 42s, sys: 6min 35s, total: 29min 18s\n",
            "Wall time: 38min 34s\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BSqMu64oHzqy",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 120
        },
        "outputId": "95a8284b-b3b2-4a7d-f335-c4ff65f8f920"
      },
      "source": [
        "%%time\n",
        "\n",
        "_, train_acc = model.evaluate(data.train_x, data.train_y)\n",
        "_, test_acc = model.evaluate(data.test_x, data.test_y)\n",
        "\n",
        "print(\"train acc\", train_acc)\n",
        "print(\" test acc\", test_acc)"
      ],
      "execution_count": 19,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "2560/2560 [==============================] - 29s 11ms/sample - loss: 0.3414 - acc: 0.9723\n",
            "2560/2560 [==============================] - 28s 11ms/sample - loss: 0.3904 - acc: 0.9207\n",
            "train acc 0.9722656\n",
            " test acc 0.9207031\n",
            "CPU times: user 36 s, sys: 21.5 s, total: 57.5 s\n",
            "Wall time: 57.1 s\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xSKDZEnVabnl",
        "colab_type": "text"
      },
      "source": [
        "# Evaluation\n",
        "\n",
        "To evaluate the trained model, let's load the saved weights into a new model instance and evaluate it on both the training and test sets."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qCpabQ15WS3U",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 531
        },
        "outputId": "220b2b55-dd78-4795-d3b6-d364c8323734"
      },
      "source": [
        "%%time \n",
        "\n",
        "model = create_model(data.max_seq_len, adapter_size=None)\n",
        "model.load_weights(\"movie_reviews.h5\")\n",
        "\n",
        "_, train_acc = model.evaluate(data.train_x, data.train_y)\n",
        "_, test_acc = model.evaluate(data.test_x, data.test_y)\n",
        "\n",
        "print(\"train acc\", train_acc)\n",
        "print(\" test acc\", test_acc)"
      ],
      "execution_count": 26,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "bert shape (?, 128, 768)\n",
            "Done loading 196 BERT weights from: .model/uncased_L-12_H-768_A-12/bert_model.ckpt into <bert.model.BertModelLayer object at 0x7f671451f780> (prefix:bert_1)\n",
            "Model: \"model_1\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "input_ids (InputLayer)       [(None, 128)]             0         \n",
            "_________________________________________________________________\n",
            "bert (BertModelLayer)        (None, 128, 768)          108890112 \n",
            "_________________________________________________________________\n",
            "lambda_1 (Lambda)            (None, 768)               0         \n",
            "_________________________________________________________________\n",
            "dropout_2 (Dropout)          (None, 768)               0         \n",
            "_________________________________________________________________\n",
            "dense_2 (Dense)              (None, 768)               590592    \n",
            "_________________________________________________________________\n",
            "dropout_3 (Dropout)          (None, 768)               0         \n",
            "_________________________________________________________________\n",
            "dense_3 (Dense)              (None, 2)                 1538      \n",
            "=================================================================\n",
            "Total params: 109,482,242\n",
            "Trainable params: 109,482,242\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n",
            "2560/2560 [==============================] - 30s 12ms/sample - loss: 0.3414 - acc: 0.9723\n",
            "2560/2560 [==============================] - 28s 11ms/sample - loss: 0.3904 - acc: 0.9207\n",
            "train acc 0.9722656\n",
            " test acc 0.9207031\n",
            "CPU times: user 45 s, sys: 23 s, total: 1min 7s\n",
            "Wall time: 1min 9s\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5uzdOFQ5awM1",
        "colab_type": "text"
      },
      "source": [
        "# Prediction\n",
        "\n",
        "For prediction, we need to prepare the input text the same way as we did for training - tokenize it, add the special `[CLS]` and `[SEP]` tokens at the beginning and end of each token sequence, and pad the sequences to match the model's input shape."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "m7dAAoCuW1xj",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 171
        },
        "outputId": "15f4a565-1659-4c7a-a21a-5a1322389746"
      },
      "source": [
        "pred_sentences = [\n",
        "  \"That movie was absolutely awful\",\n",
        "  \"The acting was a bit lacking\",\n",
        "  \"The film was creative and surprising\",\n",
        "  \"Absolutely fantastic!\"\n",
        "]\n",
        "\n",
        "tokenizer = FullTokenizer(vocab_file=os.path.join(bert_ckpt_dir, \"vocab.txt\"))\n",
        "pred_tokens    = map(tokenizer.tokenize, pred_sentences)\n",
        "pred_tokens    = map(lambda tok: [\"[CLS]\"] + tok + [\"[SEP]\"], pred_tokens)\n",
        "pred_token_ids = list(map(tokenizer.convert_tokens_to_ids, pred_tokens))\n",
        "\n",
        "pred_token_ids = map(lambda tids: tids + [0] * (data.max_seq_len - len(tids)), pred_token_ids)\n",
        "pred_token_ids = np.array(list(pred_token_ids))\n",
        "\n",
        "print('pred_token_ids', pred_token_ids.shape)\n",
        "\n",
        "res = model.predict(pred_token_ids).argmax(axis=-1)\n",
        "\n",
        "for text, sentiment in zip(pred_sentences, res):\n",
        "  print(\" text:\", text)\n",
        "  print(\"  res:\", [\"negative\",\"positive\"][sentiment])"
      ],
      "execution_count": 45,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "pred_token_ids (4, 128)\n",
            " text: That movie was absolutely awful\n",
            "  res: negative\n",
            " text: The acting was a bit lacking\n",
            "  res: negative\n",
            " text: The film was creative and surprising\n",
            "  res: positive\n",
            " text: Absolutely fantastic!\n",
            "  res: positive\n"
          ],
          "name": "stdout"
        }
      ]
    }
  ]
}