{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "Movie Reviews with bert-for-tf2 on TPU.ipynb",
      "version": "0.3.2",
      "provenance": [],
      "collapsed_sections": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "TPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/kpe/bert-for-tf2/blob/master/examples/tpu_movie_reviews.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XgnJDqeiSPqc",
        "colab_type": "text"
      },
      "source": [
        "# Overview\n",
        "\n",
        "This Colab notebook demonstrates how to fine-tune a\n",
        "BERT-based sentiment classifier on the IMDB Movie Reviews\n",
        "dataset using a freely provided Colab TPU.\n",
        "\n",
        "We'll be using the TensorFlow Keras implementation of BERT from [kpe/bert-for-tf2](https://github.com/kpe/bert-for-tf2)\n",
        "and the pre-trained BERT weights from [google-research/bert](https://github.com/google-research/bert).\n",
        "Instead of fine-tuning all of BERT's weights, we'll make use of the [adapter-BERT](https://arxiv.org/abs/1902.00751) architecture to fine-tune only a fraction of the weights, while keeping the original BERT weights frozen.\n",
        "\n",
        "\n",
        "Currently, the combination of a Colab TPU and Keras does not work with the TensorFlow 2.0 beta or with eager execution in TensorFlow 1.14, so we have to stick to TF 1.14 with eager execution disabled.\n",
        "\n",
        "The main steps for training a Keras model on a TPU in Colab are:\n",
        " - **Google Storage Bucket** - TPUs currently need access to a Google Storage bucket for loading training data and weights and for storing model checkpoints.\n",
        " - **GCP Authentication** - once you have a storage bucket, authorizing Colab to use it is easy.\n",
        " - **pre-trained BERT** - we also have to copy the pre-trained BERT weights to our storage bucket (loading the pre-trained checkpoint requires list permissions).\n",
        " - **TFRecord** - to fully utilize the TPU, we need to feed the training data as efficiently as possible; we do this with a `TFRecordDataset`, encoding our training examples into tfrecord files.\n",
        " - **TPU Training** - simply creating a Keras model inside a `TPU Distribution Strategy` scope is then enough to place our model on the TPU, ready for training.\n",
        " \n",
        "\n"
      ]
    },
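    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The last step above can be sketched as follows (a minimal sketch for TF 1.14, assuming a Colab TPU runtime; `create_model` is a hypothetical stand-in for any Keras model-building function):\n",
        "\n",
        "```python\n",
        "import os\n",
        "import tensorflow as tf  # TF 1.14\n",
        "\n",
        "# Resolve the Colab TPU and initialize it.\n",
        "TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']\n",
        "resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)\n",
        "tf.tpu.experimental.initialize_tpu_system(resolver)\n",
        "strategy = tf.distribute.experimental.TPUStrategy(resolver)\n",
        "\n",
        "# Any Keras model created inside the strategy scope is placed on the TPU.\n",
        "with strategy.scope():\n",
        "    model = create_model()  # hypothetical model-building function\n",
        "```"
      ]
    },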
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aB_F9OQqKK_q",
        "colab_type": "text"
      },
      "source": [
        "# Storage Bucket Authentication\n",
        "\n",
        "You need to setup a storage bucket in GCP for storing and loading model weights and feeding data into the TPUs."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AXsSMfLdLeLC",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import os\n",
        "import tensorflow as tf"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XkkQli7WKdvJ",
        "colab_type": "code",
        "outputId": "bbdfe252-bd72-42a7-a92c-28cf7d6d9752",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        }
      },
      "source": [
        "BUCKET = 'YOUR_BUCKET_NAME' #@param {type:\"string\"}\n",
        "\n",
        "OUTPUT_DIR = 'PATH_IN_YOUR_BUCKET'#@param {type:\"string\"}\n",
        "\n",
        "#@markdown Whether or not to clear/delete the directory and create a new one\n",
        "DO_DELETE = False #@param {type:\"boolean\"}\n",
        "\n",
        "\n",
        "OUTPUT_DIR = 'gs://{}/colab/{}'.format(BUCKET, OUTPUT_DIR)\n",
        "from google.colab import auth\n",
        "auth.authenticate_user()\n",
        "\n",
        "if DO_DELETE:\n",
        "  try:\n",
        "    tf.gfile.DeleteRecursively(OUTPUT_DIR)\n",
        "  except tf.errors.NotFoundError:\n",
        "    # Doesn't matter if the directory didn't exist\n",
        "    pass\n",
        "tf.gfile.MakeDirs(OUTPUT_DIR)\n",
        "print('***** Model output directory: {} *****'.format(OUTPUT_DIR))"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "***** Model output directory: gs://kpe/colab/movie_review *****\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dCpvgG0vwXAZ",
        "colab_type": "text"
      },
      "source": [
        "# Prerequisites"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qFI2_B8ffipb",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!pip install tqdm >> /dev/null\n",
        "\n",
        "#!pip install -q tensorflow==2.0.0-beta1 # TPU with Keras on TF 2.0 seems to have problems, so we don't use it yet\n",
        "#!pip install --upgrade 'tensorflow>=2rc0'"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hsZvic2YxnTz",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import os\n",
        "import math\n",
        "import datetime\n",
        "\n",
        "\n",
        "from tqdm import tqdm\n",
        "\n",
        "import numpy as np\n",
        "\n",
        "import tensorflow as tf\n",
        "from tensorflow import keras"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Evlk1N78HIXM",
        "colab_type": "code",
        "outputId": "b2691bb5-d823-439d-8888-b9fcda90b047",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        }
      },
      "source": [
        "tf.__version__"
      ],
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'1.14.0'"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 5
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HqYo_14wJ_AY",
        "colab_type": "text"
      },
      "source": [
        "To enable the TPU, it seems to be necessary to make this `tf.config.experimental` call at the beginning of the session:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TAdrQqEccIva",
        "colab_type": "code",
        "outputId": "33fd3666-90c7-49e9-e26f-73deca3f3c90",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 52
        }
      },
      "source": [
        "USE_TPU=True\n",
        "try:\n",
        "  # This address identifies the TPU we'll use when configuring TensorFlow.\n",
        "  TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']\n",
        "  tf.config.experimental_connect_to_host(TPU_WORKER)\n",
        "except Exception as ex:\n",
        "  print(ex)\n",
        "  USE_TPU=False\n",
        "\n",
        "print(\"        USE_TPU:\", USE_TPU)\n",
        "print(\"Eager Execution:\", tf.executing_eagerly())\n",
        "\n",
        "assert not tf.executing_eagerly(), \"Eager execution on TPUs has issues currently\""
      ],
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "        USE_TPU: True\n",
            "Eager Execution: False\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cp5wfXDx5SPH",
        "colab_type": "text"
      },
      "source": [
        "Let's also pip install the [bert-for-tf2](https://github.com/kpe/bert-for-tf2) Python package, which contains the Keras implementation of BERT."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jviywGyWyKsA",
        "colab_type": "code",
        "outputId": "5bb3cf1c-9f70-40c6-adb1-715835eba26d",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 104
        }
      },
      "source": [
        "!pip install --upgrade bert-for-tf2 params-flow #>> /dev/null"
      ],
      "execution_count": 7,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Requirement already up-to-date: bert-for-tf2 in /usr/local/lib/python3.6/dist-packages (0.4.2)\n",
            "Requirement already up-to-date: params-flow in /usr/local/lib/python3.6/dist-packages (0.6.6)\n",
            "Requirement already satisfied, skipping upgrade: py-params>=0.6.4 in /usr/local/lib/python3.6/dist-packages (from bert-for-tf2) (0.6.4)\n",
            "Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from params-flow) (1.16.4)\n",
            "Requirement already satisfied, skipping upgrade: tqdm in /usr/local/lib/python3.6/dist-packages (from params-flow) (4.28.1)\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZtI7cKWDbUVc",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import params_flow as pf\n",
        "\n",
        "from bert import BertModelLayer\n",
        "from bert import FullTokenizer\n",
        "from bert import load_stock_weights, params_from_pretrained_ckpt\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9U4F7ZQY6sUP",
        "colab_type": "text"
      },
      "source": [
        "# The Pre-Trained BERT Model\n",
        "\n",
        "The original pre-trained BERT weights are available in a Google Storage bucket at `gs://bert_models/`, but without the list permissions needed by the TensorFlow APIs that load weights from a pre-trained checkpoint, so we have to copy the pre-trained model to our own bucket:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lw_F488eixTV",
        "colab_type": "code",
        "outputId": "66f3498c-5b0b-49c5-abff-0ff53af5aad9",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 104
        }
      },
      "source": [
        "bert_ckpt_dir    = \"gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12\"\n",
        "bert_ckpt_file   = os.path.join(bert_ckpt_dir, \"bert_model.ckpt\")\n",
        "bert_config_file = os.path.join(bert_ckpt_dir, \"bert_config.json\")\n",
        "bert_model_name  = os.path.basename(os.path.dirname(bert_ckpt_file))\n",
        "\n",
        "bert_ckpt_files = [\"bert_config.json\",\n",
        "                   \"bert_model.ckpt.data-00000-of-00001\",\n",
        "                   \"bert_model.ckpt.index\",\n",
        "                   \"bert_model.ckpt.meta\",\n",
        "                   \"vocab.txt\"]\n",
        "\n",
        "gs_bert_ckpt_dir = os.path.join(OUTPUT_DIR, \"bert_models\", bert_model_name)\n",
        "if not tf.io.gfile.exists(gs_bert_ckpt_dir):\n",
        "  cmd = \" \".join([os.path.join(bert_ckpt_dir, bert_file)\n",
        "                   for bert_file in bert_ckpt_files])\n",
        "  cmd = \"gsutil -m cp {} {}\".format(cmd, gs_bert_ckpt_dir)\n",
        "  !$cmd\n",
        "\n",
        "!gsutil ls $gs_bert_ckpt_dir"
      ],
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_config.json\n",
            "gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt.data-00000-of-00001\n",
            "gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt.index\n",
            "gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt.meta\n",
            "gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/vocab.txt\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fQiPKscKPpmT",
        "colab_type": "code",
        "outputId": "23fdf90f-ef26-4fe8-d13c-697e97d592b6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        }
      },
      "source": [
        "bert_ckpt_dir    = gs_bert_ckpt_dir\n",
        "bert_ckpt_file   = os.path.join(bert_ckpt_dir, \"bert_model.ckpt\")\n",
        "bert_config_file = os.path.join(bert_ckpt_dir, \"bert_config.json\")\n",
        "\n",
        "print(\"Using BERT checkpoint from:\", bert_ckpt_dir)"
      ],
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Using BERT checkpoint from: gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pmFYvkylMwXn",
        "colab_type": "text"
      },
      "source": [
        "# The IMDB Movie Review Dataset"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cx5Yard1w5oe",
        "colab_type": "text"
      },
      "source": [
        "Let's use [kpe/params-flow](https://github.com/kpe/params-flow) to lazily download and unpack the IMDB Movie Reviews dataset (`params_flow` was already pip installed as a [kpe/bert-for-tf2](https://github.com/kpe/bert-for-tf2) dependency)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SziWodfBwy-N",
        "colab_type": "code",
        "outputId": "d62a5f05-0220-4368-9887-d8843af79e80",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 191
        }
      },
      "source": [
        "fetched_file = pf.utils.fetch_url(\"http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\", fetch_dir=\".data\")\n",
        "unpack_dir   = pf.utils.unpack_archive(fetched_file)\n",
        "data_dir     = os.path.join(unpack_dir, \"aclImdb\")\n",
        "\n",
        "!ls -la $data_dir"
      ],
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Already  fetched:  aclImdb_v1.tar.gz\n",
            "already unpacked at: .data/aclImdb_v1\n",
            "total 1732\n",
            "drwxr-xr-x 4 7297 1000   4096 Jun 26  2011 .\n",
            "drwxr-xr-x 3 root root   4096 Sep  4 09:42 ..\n",
            "-rw-r--r-- 1 7297 1000 903029 Jun 11  2011 imdbEr.txt\n",
            "-rw-r--r-- 1 7297 1000 845980 Apr 12  2011 imdb.vocab\n",
            "-rw-r--r-- 1 7297 1000   4037 Jun 26  2011 README\n",
            "drwxr-xr-x 4 7297 1000   4096 Apr 12  2011 test\n",
            "drwxr-xr-x 5 7297 1000   4096 Jun 26  2011 train\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qYwlvtPK08Ec",
        "colab_type": "text"
      },
      "source": [
        "# The TFRecords Conversion\n",
        "\n",
        "We will now convert the raw dataset into TFRecord files, for which we need the following functions:\n",
        " - load every dataset file\n",
        " - preprocess each sample by removing the `<br />` markup\n",
        " - encode the label to an integer (i.e. 0 or 1) and the text by tokenizing it with the BERT tokenizer and taking the integer token ids (a vocab file is provided with the pre-trained model)\n",
        " - serialize the encoded examples to a tfrecord file"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "REOlJ8ipx8lW",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from functools import partial\n",
        "from glob import glob\n",
        "from multiprocessing import Pool\n",
        "\n",
        "\n",
        "def load_sample(path):\n",
        "    \"\"\"Loads an IMDB Movie Reviews data sample from a file.\"\"\"\n",
        "    label   = path.split('/')[-2]\n",
        "    with open(path, \"r\") as f:\n",
        "        content = f.read()\n",
        "    return content, label\n",
        "    \n",
        "def preprocess_sample(content, label):\n",
        "    content = content.replace(\"<br />\", \" \")\n",
        "    return content, label\n",
        "    \n",
        "def encode_sample(content, label, tokenizer):\n",
        "    content = tokenizer.tokenize(content)\n",
        "    content = tokenizer.convert_tokens_to_ids(content)\n",
        "    label = int(label == \"pos\")\n",
        "    return content, label\n",
        "\n",
        "def serialize_example(token_ids, label):\n",
        "    feature = {\n",
        "        \"token_ids\": tf.train.Feature(int64_list=tf.train.Int64List(value=token_ids)),\n",
        "        \"label\":     tf.train.Feature(int64_list=tf.train.Int64List(value=[label]))\n",
        "    }\n",
        "    proto = tf.train.Example(features=tf.train.Features(feature=feature))\n",
        "    return proto.SerializeToString()\n",
        "\n",
        "def to_tfrecord(file_path, tokenizer):\n",
        "    sample = load_sample(file_path)\n",
        "    sample = preprocess_sample(*sample)\n",
        "    sample = encode_sample(*sample, tokenizer=tokenizer)\n",
        "    sample = serialize_example(*sample)\n",
        "    return sample\n",
        "\n",
        "def convert_to_tfrecord_file(file_name, ds_dir, serializer_fn):\n",
        "    with tf.python_io.TFRecordWriter(file_name) as writer:    \n",
        "        all_files = glob(os.path.join(ds_dir, \"pos/*\"))\n",
        "        all_files += glob(os.path.join(ds_dir, \"neg/*\"))\n",
        "        with Pool() as pool:\n",
        "            protos = pool.imap_unordered(serializer_fn, all_files)\n",
        "            for proto in tqdm(protos, total=len(all_files)):\n",
        "                writer.write(proto)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "igZnhhTA3hmH",
        "colab_type": "text"
      },
      "source": [
        "We will also store the `tfrecord` files in our bucket:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VtvWU6630yPS",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "train_tfrecord_file = os.path.join(OUTPUT_DIR, \"data\", \"train.tfrecord\")\n",
        "test_tfrecord_file  = os.path.join(OUTPUT_DIR, \"data\", \"test.tfrecord\")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sNtdVOHT01ha",
        "colab_type": "text"
      },
      "source": [
        "Now we can instantiate the BERT tokenizer and do the TFRecord conversion:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NyfPI4dayRXm",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "tokenizer = FullTokenizer(os.path.join(bert_ckpt_dir, \"vocab.txt\"))\n",
        "\n",
        "def serialize_to_tfrecord(ds_file):\n",
        "    return to_tfrecord(ds_file, tokenizer)\n",
        "        \n",
        "if not all([tf.io.gfile.exists(train_tfrecord_file),\n",
        "            tf.io.gfile.exists(test_tfrecord_file)]):\n",
        "  print(\"Preparing the [train, test].tfrecord files...\")\n",
        "  \n",
        "  convert_to_tfrecord_file(train_tfrecord_file, \n",
        "                           os.path.join(data_dir, \"train\"),\n",
        "                           serialize_to_tfrecord)\n",
        "  convert_to_tfrecord_file(test_tfrecord_file, \n",
        "                           os.path.join(data_dir, \"test\"),\n",
        "                           serialize_to_tfrecord)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P1tmUIc97AEg",
        "colab_type": "text"
      },
      "source": [
        "To read the tfrecord files we use a `TFRecordDataset`. To batch the data, all sequences must have the same length, so we also trim and pad them:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "F9rORqVi5sii",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        },
        "outputId": "e6f252db-51a4-455f-ff1e-b3d76890939f"
      },
      "source": [
        "def tfrecord_to_dataset(filenames):\n",
        "  ds = tf.data.TFRecordDataset(filenames)\n",
        "  feature_description = {\n",
        "      \"token_ids\":  tf.io.VarLenFeature(tf.int64),\n",
        "      \"label\":      tf.io.FixedLenFeature([], tf.int64, default_value=-1)\n",
        "  }\n",
        "\n",
        "  def parse_proto(proto):\n",
        "    example = tf.io.parse_single_example(proto, feature_description)\n",
        "    token_ids, label = example[\"token_ids\"], example[\"label\"]\n",
        "    token_ids = tf.sparse_tensor_to_dense(token_ids)\n",
        "    return token_ids, label\n",
        "\n",
        "  return ds.map(parse_proto)\n",
        "\n",
        "pad_id, cls_id, sep_id = tokenizer.convert_tokens_to_ids([\"[PAD]\", \"[CLS]\", \"[SEP]\"])\n",
        "print(\"pad cls sep:\", pad_id, cls_id, sep_id)\n",
        "\n",
        "def create_pad_example_fn(pad_len, \n",
        "                          pad_id=pad_id, \n",
        "                          cls_id=cls_id, \n",
        "                          sep_id=sep_id,\n",
        "                          trim_beginning=True):\n",
        "  def pad_example(x, label):\n",
        "    seq_len = pad_len - 2\n",
        "    x = x[-seq_len:] if trim_beginning else x[:seq_len]\n",
        "    x = tf.pad(x, [[0, seq_len - tf.shape(x)[-1]]], constant_values=pad_id)\n",
        "    x = tf.concat([[cls_id], x, [sep_id]], axis=-1)\n",
        "    return  tf.reshape(x, (pad_len,)), tf.reshape(label, ())\n",
        "  return pad_example    \n"
      ],
      "execution_count": 15,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "pad cls sep: 0 101 102\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "R454eTyqIKnX",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "train_tfrecord_file = os.path.join(OUTPUT_DIR, \"data\", \"train.tfrecord\")\n",
        "test_tfrecord_file  = os.path.join(OUTPUT_DIR, \"data\", \"test.tfrecord\")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hjlcx1E9-2ns",
        "colab_type": "text"
      },
      "source": [
        "BERT can handle at most 512 tokens, and its computational and memory requirements scale quadratically with the input sequence length, so we will have to trim the sequences to a shorter size. But let's first check the sequence length distribution and how many examples would be affected if we trim all sequences to 512 tokens:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lbQjy2Kh7rNC",
        "colab_type": "code",
        "outputId": "102b7a64-4fcf-49f9-d9e7-fdf8419e63c5",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 580
        }
      },
      "source": [
        "import pandas as pd\n",
        "\n",
        "def get_sequence_lengths(tfrecord_file):\n",
        "  res = []\n",
        "\n",
        "  def sample_seq_len(tok_id, lab):\n",
        "    return tf.shape(tok_id)[0]\n",
        "  \n",
        "  ds = tfrecord_to_dataset([tfrecord_file]).map(sample_seq_len).batch(128)\n",
        "  if tf.executing_eagerly():\n",
        "    for seq_len in ds:\n",
        "      res.extend(seq_len.numpy())\n",
        "  else:\n",
        "    it = tf.compat.v1.data.make_one_shot_iterator(ds)\n",
        "    seq_lens = it.get_next()\n",
        "    with tf.Session() as sess:\n",
        "      try:\n",
        "        while True:\n",
        "          res.extend(list(sess.run(seq_lens)))\n",
        "      except tf.errors.OutOfRangeError:\n",
        "        pass\n",
        "  return res\n",
        "\n",
        "\n",
        "def show_drop_count(lens, max_seq_len = 512, name=\"ds\"):\n",
        "    df = pd.DataFrame(lens)\n",
        "    df.hist(bins=100)\n",
        "    drop_count = df[df[0]>max_seq_len].shape[0]\n",
        "    print(\"{:>5s} drop count: {} of {} - {:5.2f}%\".format(name, drop_count, len(lens), 100*drop_count/len(lens)))\n",
        "\n",
        "train_lens = get_sequence_lengths(train_tfrecord_file)\n",
        "test_lens = get_sequence_lengths(test_tfrecord_file)\n",
        "\n",
        "    \n",
        "show_drop_count(train_lens, name=\"train\")\n",
        "show_drop_count(test_lens, name=\"test\")"
      ],
      "execution_count": 17,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "train drop count: 3290 of 25000 - 13.16%\n",
            " test drop count: 3057 of 25000 - 12.23%\n"
          ],
          "name": "stdout"
        },
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYAAAAEICAYAAABWJCMKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAF9NJREFUeJzt3X9sXeV9x/H3p+GnME3CYFYWoiWs\n6SoKawpWQteqskENIf0jVOqqdAgSCnLXhqnV2IRp1UH5IaUbLRoqo0qXjNB2uBktwgphNE2xEH9Q\nkrQhP2AQF8KKlRK1CQFDRxf23R/3cXrxbN8fvvY918/nJV3dc57znHOfb47jj8+Pe68iAjMzy8+7\nmj0AMzNrDgeAmVmmHABmZplyAJiZZcoBYGaWKQeAmVmmHABmZplyAJjVSNIZkh6U9IaklyT9ZbPH\nZFaPE5o9ALMWdDfwO6AdWAQ8LOnpiNjX3GGZ1UZ+J7BZ9SSdBhwBzouI51Pbd4DBiOhp6uDMauRT\nQGa1eS9wbPiXf/I08P4mjcesbg4As9q0Aa+NaDsKnN6EsZhNiAPArDZDwLtHtL0beL0JYzGbEAeA\nWW2eB06QtLCs7QOALwBby/FFYLMaSeoFAriW0l1AW4A/911A1mp8BGBWu88DpwKHgPuBz/mXv7Ui\nHwGYmWXKRwBmZplyAJiZZcoBYGaWKQeAmVmmCv1hcGeeeWbMnz+/rnXfeOMNTjvttMYOqAlcR7G4\njmKZLnVAY2vZuXPnryPirEr9Ch0A8+fPZ8eOHXWt29/fT2dnZ2MH1ASuo1hcR7FMlzqgsbVIeqma\nfj4FZGaWKQeAmVmmHABmZplyAJiZZcoBYGaWKQeAmVmmHABmZplyAJiZZapiAEg6RdJTkp6WtE/S\nV1P7vZJelLQrPRaldkm6S9KApN2SLijb1ipJ+9Nj1eSVZWZmlVTzTuC3gIsjYkjSicATkh5Jy/4u\nIh4Y0f8yYGF6LAHuAZZIOgO4Ceig9G1KOyX1RcSRRhTSaPN7Hj4+fWDtx5s4EjOzyVHxCCBKhtLs\niekx3rfIrADuS+s9CcySNAe4FNgaEYfTL/2twLKJDd/MzOpV1TeCSZoB7ATeA9wdETdIuhf4EKUj\nhG1AT0S8JWkzsDYinkjrbgNuADqBUyLittT+FeC3EXHHiNfqBroB2tvbL+zt7a2rsKGhIdra2upa\nF2DP4NHj0+fPnVn3diZqonUUhesoFtdRPI2spaura2dEdFTqV9WHwUXE28AiSbOAByWdB9wI/Ao4\nCVhH6Zf8LfUP+fhrrUvbo6OjI+r9cKSJfrDS6vJTQFfUv52Jmi4fduU6isV1FE8zaqnpLqCIeBV4\nDFgWEQfTaZ63gH8FFqdug8C8stXOTm1jtZuZWRNUcxfQWekvfySdCnwM+M90Xh9JAi4H9qZV+oCr\n0t1AFwFHI+Ig8CiwVNJsSbOBpanNzMyaoJpTQHOAjek6wLuATRGxWdJPJJ0FCNgF/FXqvwVYDgwA\nbwJXA0TEYUm3AttTv1si4nDjSjEzs1pUDICI2A18cJT2i8foH8CaMZZtADbUOEYzM5sEfiewmVmm\nHABmZpkq9HcCT7Xyd/+amU13PgIwM8uUA8DMLFMOADOzTDkAzMwy5QAwM8uUA8DMLFMOADOzTDkA\nzMwy5QAwM8uUA8DMLFMOADOzTDkAzMwy5QAwM8uUA8DMLFMOADOzTDkAzMwy5QAwM8tUxQCQdIqk\npyQ9LWmfpK+m9gWSfippQNL3JZ2U2k9O8wNp+fyybd2Y2p+TdOlkFWVmZpVVcwTwFnBxRHwAWAQs\nk3QR8DXgzoh4D3AEuCb1vwY4ktrvTP2QdC6wEng/sAz4Z0kzGlmMmZlVr2IARMlQmj0xPQK4GHgg\ntW8ELk/TK9I8afklkpTaeyPi
rYh4ERgAFjekCjMzq5kionKn0l/qO4H3AHcD/wg8mf7KR9I84JGI\nOE/SXmBZRLyclv0CWALcnNb5bmpfn9Z5YMRrdQPdAO3t7Rf29vbWVdjQ0BBtbW01rbNn8Oio7efP\nnVnXGBqhnjqKyHUUi+sonkbW0tXVtTMiOir1O6GajUXE28AiSbOAB4H3TXB8473WOmAdQEdHR3R2\ndta1nf7+fmpdd3XPw6O2H7iivjE0Qj11FJHrKBbXUTzNqKWmu4Ai4lXgMeBDwCxJwwFyNjCYpgeB\neQBp+UzgN+Xto6xjZmZTrJq7gM5Kf/kj6VTgY8CzlILgk6nbKuChNN2X5knLfxKl80x9wMp0l9AC\nYCHwVKMKMTOz2lRzCmgOsDFdB3gXsCkiNkt6BuiVdBvwc2B96r8e+I6kAeAwpTt/iIh9kjYBzwDH\ngDXp1JKZmTVBxQCIiN3AB0dpf4FR7uKJiP8G/mKMbd0O3F77MM3MrNH8TmAzs0w5AMzMMuUAMDPL\nlAPAzCxTDgAzs0w5AMzMMuUAMDPLlAPAzCxTDgAzs0w5AMzMMuUAMDPLlAPAzCxTDgAzs0w5AMzM\nMuUAMDPLlAPAzCxTDgAzs0w5AMzMMuUAMDPLlAPAzCxTDgAzs0xVDABJ8yQ9JukZSfskfSG13yxp\nUNKu9Fhets6NkgYkPSfp0rL2ZaltQFLP5JRkZmbVOKGKPseA6yPiZ5JOB3ZK2pqW3RkRd5R3lnQu\nsBJ4P/BHwI8lvTctvhv4GPAysF1SX0Q804hCzMysNhUDICIOAgfT9OuSngXmjrPKCqA3It4CXpQ0\nACxOywYi4gUASb2prwPAzKwJFBHVd5bmA48D5wF/A6wGXgN2UDpKOCLpm8CTEfHdtM564JG0iWUR\ncW1qvxJYEhHXjXiNbqAboL29/cLe3t66ChsaGqKtra2mdfYMHq3Y5/y5M+saT73qqaOIXEexuI7i\naWQtXV1dOyOio1K/ak4BASCpDfgB8MWIeE3SPcCtQKTnrwOfqXO8x0XEOmAdQEdHR3R2dta1nf7+\nfmpdd3XPwxX7HLiivvHUq546ish1FIvrKJ5m1FJVAEg6kdIv/+9FxA8BIuKVsuXfBjan2UFgXtnq\nZ6c2xmk3M7MpVs1dQALWA89GxDfK2ueUdfsEsDdN9wErJZ0saQGwEHgK2A4slLRA0kmULhT3NaYM\nMzOrVTVHAB8GrgT2SNqV2r4EfFrSIkqngA4AnwWIiH2SNlG6uHsMWBMRbwNIug54FJgBbIiIfQ2s\nxczMalDNXUBPABpl0ZZx1rkduH2U9i3jrWdmZlPH7wQ2M8uUA8DMLFMOADOzTDkAzMwy5QAwM8uU\nA8DMLFMOADOzTDkAzMwy5QAwM8uUA8DMLFMOADOzTDkAzMwy5QAwM8uUA8DMLFMOADOzTDkAzMwy\n5QAwM8uUA8DMLFMOADOzTDkAzMwyVTEAJM2T9JikZyTtk/SF1H6GpK2S9qfn2aldku6SNCBpt6QL\nyra1KvXfL2nV5JVlZmaVVHMEcAy4PiLOBS4C1kg6F+gBtkXEQmBbmge4DFiYHt3APVAKDOAmYAmw\nGLhpODTMzGzqVQyAiDgYET9L068DzwJzgRXAxtRtI3B5ml4B3BclTwKzJM0BLgW2RsThiDgCbAWW\nNbQaMzOrmiKi+s7SfOBx4DzgvyJiVmoXcCQiZknaDKyNiCfSsm3ADUAncEpE3JbavwL8NiLuGPEa\n3ZSOHGhvb7+wt7e3rsKGhoZoa2uraZ09g0cr9jl/7sy6xlOveuooItdRLK6jeBpZS1dX186I6KjU\n74RqNyipDfgB8MWIeK30O78kIkJS9UkyjohYB6wD6OjoiM7Ozrq209/fT63rru55uGKfA1fUN556\n1VNHEbmOYnEdxdOMWqq6C0jSiZR++X8vIn6Yml9Jp3ZIz4dS+yAwr2z1s1PbWO1mZtYE1dwFJG
A9\n8GxEfKNsUR8wfCfPKuChsvar0t1AFwFHI+Ig8CiwVNLsdPF3aWozM7MmqOYU0IeBK4E9knalti8B\na4FNkq4BXgI+lZZtAZYDA8CbwNUAEXFY0q3A9tTvlog43JAqzMysZhUDIF3M1RiLLxmlfwBrxtjW\nBmBDLQM0M7PJ4XcCm5llygFgZpapqm8DtZL5ZbeKHlj78SaOxMxsYnwEYGaWKQeAmVmmHABmZply\nAJiZZcoBYGaWKQeAmVmmHABmZplyAJiZZcoBYGaWKQeAmVmmHABmZplyAJiZZcoBYGaWKQeAmVmm\nHABmZplyAJiZZcoBYGaWqYoBIGmDpEOS9pa13SxpUNKu9FhetuxGSQOSnpN0aVn7stQ2IKmn8aWY\nmVktqjkCuBdYNkr7nRGxKD22AEg6F1gJvD+t88+SZkiaAdwNXAacC3w69TUzsyap+J3AEfG4pPlV\nbm8F0BsRbwEvShoAFqdlAxHxAoCk3tT3mZpHbGZmDaGIqNypFACbI+K8NH8zsBp4DdgBXB8RRyR9\nE3gyIr6b+q0HHkmbWRYR16b2K4ElEXHdKK/VDXQDtLe3X9jb21tXYUNDQ7S1tdW0zp7Bo3W9FsD5\nc2fWve546qmjiFxHsbiO4mlkLV1dXTsjoqNSv4pHAGO4B7gViPT8deAzdW7rHSJiHbAOoKOjIzo7\nO+vaTn9/P7Wuu7rn4bpeC+DAFbW9VrXqqaOIXEexuI7iaUYtdQVARLwyPC3p28DmNDsIzCvrenZq\nY5x2MzNrgrpuA5U0p2z2E8DwHUJ9wEpJJ0taACwEngK2AwslLZB0EqULxX31D9vMzCaq4hGApPuB\nTuBMSS8DNwGdkhZROgV0APgsQETsk7SJ0sXdY8CaiHg7bec64FFgBrAhIvY1vBozM6taNXcBfXqU\n5vXj9L8duH2U9i3AlppGZ2Zmk8bvBDYzy5QDwMwsUw4AM7NM1fs+gGlj/gTu/Tcza2U+AjAzy5QD\nwMwsUw4AM7NMOQDMzDLlADAzy5QDwMwsUw4AM7NMOQDMzDKV/RvBGqX8DWUH1n68iSMxM6uOjwDM\nzDKV5RGAP/7BzMxHAGZm2XIAmJllygFgZpYpB4CZWaYcAGZmmaoYAJI2SDokaW9Z2xmStkran55n\np3ZJukvSgKTdki4oW2dV6r9f0qrJKcfMzKpVzRHAvcCyEW09wLaIWAhsS/MAlwEL06MbuAdKgQHc\nBCwBFgM3DYeGmZk1R8UAiIjHgcMjmlcAG9P0RuDysvb7ouRJYJakOcClwNaIOBwRR4Ct/P9QMTOz\nKaSIqNxJmg9sjojz0vyrETErTQs4EhGzJG0G1kbEE2nZNuAGoBM4JSJuS+1fAX4bEXeM8lrdlI4e\naG9vv7C3t7euwoaGhmhraxt12Z7Bo3Vts1rnz53ZsG2NV0crcR3F4jqKp5G1dHV17YyIjkr9JvxO\n4IgISZVTpPrtrQPWAXR0dERnZ2dd2+nv72esdVdP9juB97xxfHKinws0Xh2txHUUi+sonmbUUu9d\nQK+kUzuk50OpfRCYV9bv7NQ2VruZmTVJvQHQBwzfybMKeKis/ap0N9BFwNGIOAg8CiyVNDtd/F2a\n2szMrEkqngKSdD+lc/hnSnqZ0t08a4FNkq4BXgI+lbpvAZYDA8CbwNUAEXFY0q3A9tTvlogYeWHZ\nzMymUMUAiIhPj7HoklH6BrBmjO1sADbUNDozM5s0fiewmVmmHABmZplyAJiZZcoBYGaWqSy/EnIq\n+cvizayofARgZpapbI4A/EXwZmbv5CMAM7NMOQDMzDLlADAzy1Q21wCKwHcEmVmR+AjAzCxTDgAz\ns0w5AMzMMuUAMDPLlAPAzCxTvguoSXxHkJk1m48AzMwy5QAwM8uUA8DMLFMTCgBJByTtkbRL0o7U\ndoakrZL2p+fZqV2S7pI0IGm3pAsaUYCZmdWnEReBuyLi12
XzPcC2iFgrqSfN3wBcBixMjyXAPek5\ne74gbGbNMBmngFYAG9P0RuDysvb7ouRJYJakOZPw+mZmVoWJBkAAP5K0U1J3amuPiINp+ldAe5qe\nC/yybN2XU5uZmTWBIqL+laW5ETEo6Q+BrcBfA30RMausz5GImC1pM7A2Ip5I7duAGyJix4htdgPd\nAO3t7Rf29vbWNbahoSHa2tqOz+8ZPFrXdqba+XNnvmN+ZB2tynUUi+sonkbW0tXVtTMiOir1m9A1\ngIgYTM+HJD0ILAZekTQnIg6mUzyHUvdBYF7Z6mentpHbXAesA+jo6IjOzs66xtbf30/5uqtb5Csh\nD1zR+Y75kXW0KtdRLK6jeJpRS92ngCSdJun04WlgKbAX6ANWpW6rgIfSdB9wVbob6CLgaNmpIjMz\nm2ITOQJoBx6UNLydf4uI/5C0Hdgk6RrgJeBTqf8WYDkwALwJXD2B1zYzswmqOwAi4gXgA6O0/wa4\nZJT2ANbU+3q58C2hZjZV/E7gApvf8zB7Bo++IxTMzBrFAWBmlil/HHSL8KkhM2s0HwGYmWXKAWBm\nlikHgJlZphwAZmaZ8kXgFjTebaG+QGxm1fIRgJlZpnwEMM34dlEzq5aPAMzMMuUAMDPLlE8BTWM+\nHWRm43EAZMJhYGYjOQAy5DAwM3AAZM9hYJYvB4Ad5zAwy4vvAjIzy5SPAGxUYx0NjPUxFD5iMGs9\nDgCrqJqvpPTpI7PWM60DwN+l2xyj/btff/4xOqd+KGY2jikPAEnLgH8CZgD/EhFrp3oM1hyNCuSx\njjBGbt9HImbjm9IAkDQDuBv4GPAysF1SX0Q8M5XjsNZWbZD4tJTZ+Kb6CGAxMBARLwBI6gVWAA4A\nm1TVXLx2YFhuFBFT92LSJ4FlEXFtmr8SWBIR15X16Qa60+yfAs/V+XJnAr+ewHCLwnUUi+solulS\nBzS2lj+OiLMqdSrcReCIWAesm+h2JO2IiI4GDKmpXEexuI5imS51QHNqmeo3gg0C88rmz05tZmY2\nxaY6ALYDCyUtkHQSsBLom+IxmJkZU3wKKCKOSboOeJTSbaAbImLfJL3chE8jFYTrKBbXUSzTpQ5o\nQi1TehHYzMyKwx8GZ2aWKQeAmVmmpl0ASFom6TlJA5J6mj2eSiQdkLRH0i5JO1LbGZK2Stqfnmen\ndkm6K9W2W9IFTRz3BkmHJO0ta6t53JJWpf77Ja0qUC03SxpM+2WXpOVly25MtTwn6dKy9qb97Ema\nJ+kxSc9I2ifpC6m9pfbJOHW01P5Ir3+KpKckPZ1q+WpqXyDpp2lc3083xCDp5DQ/kJbPr1TjhEXE\ntHlQurD8C+Ac4CTgaeDcZo+rwpgPAGeOaPsHoCdN9wBfS9PLgUcAARcBP23iuD8KXADsrXfcwBnA\nC+l5dpqeXZBabgb+dpS+56afq5OBBennbUazf/aAOcAFafp04Pk01pbaJ+PU0VL7I41NQFuaPhH4\nafq33gSsTO3fAj6Xpj8PfCtNrwS+P16NjRjjdDsCOP5RExHxO2D4oyZazQpgY5reCFxe1n5flDwJ\nzJI0pxkDjIjHgcMjmmsd96XA1og4HBFHgK3Asskf/TuNUctYVgC9EfFWRLwIDFD6uWvqz15EHIyI\nn6Xp14Fngbm02D4Zp46xFHJ/AKR/26E0e2J6BHAx8EBqH7lPhvfVA8AlksTYNU7YdAuAucAvy+Zf\nZvwfniII4EeSdqr0MRgA7RFxME3/CmhP00Wvr9ZxF72e69LpkQ3Dp05ogVrSqYMPUvqLs2X3yYg6\noAX3h6QZknYBhyiF6S+AVyPi2CjjOj7mtPwo8AdMYi3TLQBa0Uci4gLgMmCNpI+WL4zSMWDL3avb\nquMucw/wJ8Ai4CDw9eYOpzqS2oAfAF+MiNfKl7XSPhmljpbcHxHxdkQsovSpB4uB9zV5SO8w3QKg\n5T5qIiIG0/Mh4EFKPy
SvDJ/aSc+HUvei11fruAtbT0S8kv7z/i/wbX5/yF3YWiSdSOmX5vci4oep\nueX2yWh1tOL+KBcRrwKPAR+idLpt+E245eM6Pua0fCbwGyaxlukWAC31UROSTpN0+vA0sBTYS2nM\nw3dfrAIeStN9wFXpDo6LgKNlh/dFUOu4HwWWSpqdDumXpramG3Ft5ROU9guUalmZ7thYACwEnqLJ\nP3vpXPF64NmI+EbZopbaJ2PV0Wr7I435LEmz0vSplL4H5VlKQfDJ1G3kPhneV58EfpKO2saqceKm\n8qr4VDwo3d3wPKVzbV9u9ngqjPUcSlf3nwb2DY+X0nm/bcB+4MfAGfH7uwruTrXtATqaOPb7KR2K\n/w+lc5LX1DNu4DOULmoNAFcXqJbvpLHuTv8B55T1/3Kq5TngsiL87AEfoXR6ZzewKz2Wt9o+GaeO\nltof6fX/DPh5GvNe4O9T+zmUfoEPAP8OnJzaT0nzA2n5OZVqnOjDHwVhZpap6XYKyMzMquQAMDPL\nlAPAzCxTDgAzs0w5AMzMMuUAMDPLlAPAzCxT/wePzVMnf2yyxAAAAABJRU5ErkJggg==\n",
            "text/plain": [
              "<Figure size 432x288 with 1 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        },
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYgAAAEICAYAAABF82P+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAF29JREFUeJzt3X+s5XWd3/HnSwQ0jCuw2JspkA7W\n2WxQKuINYNdsrhphwBA0sQZLZFA2s92FVFPaOO6mxR9Lgs2qqSnLdixT0XUdqUqcIJbOIjfGP5Af\n7vBjQOQKY2QyMlmB0aMt27Hv/nE+M57Ofu/MveeemXvO5flITs73vL8/zufN98KL749zTqoKSZIO\n9pLlHoAkaTwZEJKkTgaEJKmTASFJ6mRASJI6GRCSpE4GhCSpkwEhLVKSk5PcluSXSX6c5F8u95ik\nI+Glyz0AaQLdCPw9MAWcDXwzyYNVtWN5hyWNVvwktbRwSU4AngNeV1U/bLUvAruqauOyDk4aMU8x\nSYvzO8C+/eHQPAi8dpnGIx0xBoS0OKuAnx9U2wu8YhnGIh1RBoS0OD3gtw6q/Rbwi2UYi3REGRDS\n4vwQeGmStQO11wNeoNaK40VqaZGSbAEK+AP6dzHdAfxz72LSSuMRhLR4fwy8HNgDfBn4I8NBK5FH\nEJKkTh5BSJI6GRCSpE4GhCSpkwEhSeo01l/Wd8opp9SaNWuGWveXv/wlJ5xwwmgHtMzsaTLY02RY\niT1Bv68f/OAHf1dVr1rqtsY6INasWcP9998/1Lqzs7PMzMyMdkDLzJ4mgz1NhpXYE/T7estb3vLj\nUWzLU0ySpE4GhCSpkwEhSepkQEiSOhkQkqROBoQkqZMBIUnqZEBIkjoZEJKkTmP9SerltGbjNw9M\n77zhHcs4EklaHh5BSJI6GRCSpE4GhCSp02EDIsnLktyb5MEkO5J8rNU/n+SpJNvb4+xWT5LPJplL\n8lCScwa2tT7JE+2x/si1JUlaqoVcpH4BeGtV9ZIcC3w3ybfavH9XVV89aPmLgLXtcR5wE3BekpOB\n64BpoIAHkmytqudG0YgkabQOewRRfb328tj2qEOscinwhbbePcCJSVYDFwLbqurZFgrbgHVLG74k\n6UhJ1aH+W98WSo4BHgBeA9xYVR9O8nngTfSPMO4CNlbVC0luB26oqu+2de8CPgzMAC+rqj9r9X8P\n/K+q+vOD3msDsAFgamrqjVu2bBmqsV6vx6pVq4ZaF+DhXXsPTJ916iuH3s4oLbWncWRPk8GeJkev\n1+OSSy55oKqml7qtBX0Ooqp+DZyd5ETgtiSvAz4C/BQ4DthEPwQ+vtQBVdWmtj2mp6dr2F98Wuqv\nRV05+DmIy4ffziitxF/AsqfJYE+TY3Z2dmTbWtRdTFX1PHA3sK6qdrfTSC8A/w04ty22Czh9YLXT\nWm2+uiRpDC3kLqZXtSMHkrwceDvwg3ZdgSQB3gk80lbZClzR7mY6H9hbVbuBO4ELkpyU5CTgglaT\nJI2hhZxiWg3c0q5DvAS4tapuT/LtJK8CAmwH/lVb/g7gYmAO+BXwfoCqejbJJ4D72nIfr6pnR9eK\nJGmUDhsQVfUQ8IaO+lvnWb6Aq+eZtxnYvMgxSpKWgZ+kliR1MiAkSZ0MCElSJwNCktTJgJAkdTIg\nJEmdDAhJUicDQpLUyYCQJHUyICRJnQwISVInA0KS1MmAkCR1MiAkSZ0MCElSJwNCktTJgJAkdTIg\nJEmdDAhJUicDQpLU6bABkeRlSe5N8mCSHUk+1upnJPlekrkkX0lyXKsf317PtflrBrb1kVZ/PMmF\nR6opSdLSLeQI4gXgrVX1euBsYF2S84FPAp+pqtcAzwFXteWvAp5r9c+05UhyJnAZ8FpgHfAXSY4Z\nZTOSpNE5bEBUX6+9PLY9Cngr
8NVWvwV4Z5u+tL2mzX9bkrT6lqp6oaqeAuaAc0fShSRp5F66kIXa\n/+k/ALwGuBH4EfB8Ve1rizwNnNqmTwV+AlBV+5LsBX671e8Z2OzgOoPvtQHYADA1NcXs7OziOmp6\nvd7Q6wJce9a+A9NL2c4oLbWncWRPk8GeJkev1zv8Qgu0oICoql8DZyc5EbgN+N2RjeAfvtcmYBPA\n9PR0zczMDLWd2dlZhl0X4MqN3zwwvfPy4bczSkvtaRzZ02Swp8kxytBb1F1MVfU8cDfwJuDEJPsD\n5jRgV5veBZwO0Oa/EvjZYL1jHUnSmFnIXUyvakcOJHk58HbgMfpB8e622HrgG216a3tNm//tqqpW\nv6zd5XQGsBa4d1SNSJJGayGnmFYDt7TrEC8Bbq2q25M8CmxJ8mfA3wI3t+VvBr6YZA54lv6dS1TV\njiS3Ao8C+4Cr26krSdIYOmxAVNVDwBs66k/ScRdSVf1v4F/Ms63rgesXP0xJ0tHmJ6klSZ0MCElS\nJwNCktTJgJAkdTIgJEmdDAhJUicDQpLUyYCQJHUyICRJnRb0ba4vdmsGv9n1hncs40gk6ejxCEKS\n1MmAkCR1MiAkSZ0MCElSJwNCktTJgJAkdTIgJEmdDAhJUicDQpLU6bABkeT0JHcneTTJjiQfbPWP\nJtmVZHt7XDywzkeSzCV5PMmFA/V1rTaXZOORaUmSNAoL+aqNfcC1VfX9JK8AHkiyrc37TFX9+eDC\nSc4ELgNeC/xj4G+S/E6bfSPwduBp4L4kW6vq0VE0IkkarcMGRFXtBna36V8keQw49RCrXApsqaoX\ngKeSzAHntnlzVfUkQJItbVkDQpLGUKpq4Qsna4DvAK8D/g1wJfBz4H76RxnPJfnPwD1V9VdtnZuB\nb7VNrKuqP2j19wHnVdU1B73HBmADwNTU1Bu3bNkyVGO9Xo9Vq1YNtS7Aw7v2dtbPOvWVQ29zqZba\n0ziyp8lgT5Oj1+txySWXPFBV00vd1oK/zTXJKuBrwIeq6udJbgI+AVR7/hTwgaUOqKo2AZsApqen\na2ZmZqjtzM7Osth1B7+1db5/NDsvH248ozBMT+POniaDPU2O2dnZkW1rQQGR5Fj64fClqvo6QFU9\nMzD/c8Dt7eUu4PSB1U9rNQ5RlySNmYXcxRTgZuCxqvr0QH31wGLvAh5p01uBy5Icn+QMYC1wL3Af\nsDbJGUmOo38he+to2pAkjdpCjiB+D3gf8HCS7a32J8B7k5xN/xTTTuAPAapqR5Jb6V983gdcXVW/\nBkhyDXAncAywuap2jLAXSdIILeQupu8C6Zh1xyHWuR64vqN+x6HWkySNDz9JLUnqZEBIkjoZEJKk\nTgaEJKmTASFJ6mRASJI6GRCSpE4GhCSpkwEhSepkQEiSOhkQkqROBoQkqZMBIUnqZEBIkjoZEJKk\nTgaEJKmTASFJ6rSQnxzVgDUbv3lgeucN71jGkUjSkeURhCSpkwEhSep02IBIcnqSu5M8mmRHkg+2\n+slJtiV5oj2f1OpJ8tkkc0keSnLOwLbWt+WfSLL+yLUlSVqqhRxB7AOuraozgfOBq5OcCWwE7qqq\ntcBd7TXARcDa9tgA3AT9QAGuA84DzgWu2x8qkqTxc9iAqKrdVfX9Nv0L4DHgVOBS4Ja22C3AO9v0\npcAXqu8e4MQkq4ELgW1V9WxVPQdsA9aNtBtJ0sgs6i6mJGuANwDfA6aqaneb9VNgqk2fCvxkYLWn\nW22++sHvsYH+kQdTU1PMzs4uZogH9Hq9Ra977Vn7FrX8sGMb1jA9jTt7mgz2NDl6vd7ItrXggEiy\nCvga8KGq+nmSA/OqqpLUKAZUVZuATQDT09M1MzMz1HZmZ2dZ7LpXDtzCuhA7L1/c9pdqmJ7GnT1N\nBnuaHKMMvQXdxZTkWPrh8KWq+norP9NOHdGe97T6LuD0gdVPa7X56pKkMbSQu5gC3Aw8VlWfHp
i1\nFdh/J9J64BsD9Sva3UznA3vbqag7gQuSnNQuTl/QapKkMbSQU0y/B7wPeDjJ9lb7E+AG4NYkVwE/\nBt7T5t0BXAzMAb8C3g9QVc8m+QRwX1vu41X17Ei6kCSN3GEDoqq+C2Se2W/rWL6Aq+fZ1mZg82IG\nKElaHn6SWpLUyYCQJHUyICRJnQwISVInA0KS1MmAkCR1MiAkSZ0MCElSJwNCktTJgJAkdTIgJEmd\nDAhJUicDQpLUyYCQJHVa1G9S6/+3ZuAnSnfe8I5lHIkkjZ5HEJKkTgaEJKmTASFJ6mRASJI6HTYg\nkmxOsifJIwO1jybZlWR7e1w8MO8jSeaSPJ7kwoH6ulabS7Jx9K1IkkZpIUcQnwfWddQ/U1Vnt8cd\nAEnOBC4DXtvW+YskxyQ5BrgRuAg4E3hvW1aSNKYOe5trVX0nyZoFbu9SYEtVvQA8lWQOOLfNm6uq\nJwGSbGnLPrroEUuSjoqlfA7imiRXAPcD11bVc8CpwD0DyzzdagA/Oah+XtdGk2wANgBMTU0xOzs7\n1OB6vd6i1732rH1DvRcw9DgXY5iexp09TQZ7mhy9Xm9k2xo2IG4CPgFUe/4U8IFRDKiqNgGbAKan\np2tmZmao7czOzrLYda8c+ODbYu28fHHvNYxhehp39jQZ7GlyjDL0hgqIqnpm/3SSzwG3t5e7gNMH\nFj2t1ThEXZI0hoa6zTXJ6oGX7wL23+G0FbgsyfFJzgDWAvcC9wFrk5yR5Dj6F7K3Dj9sSdKRdtgj\niCRfBmaAU5I8DVwHzCQ5m/4ppp3AHwJU1Y4kt9K/+LwPuLqqft22cw1wJ3AMsLmqdoy8G0nSyCzk\nLqb3dpRvPsTy1wPXd9TvAO5Y1OgkScvGT1JLkjoZEJKkTgaEJKmTASFJ6mRASJI6+ZOjI+LPj0pa\naTyCkCR1MiAkSZ0MCElSJwNCktTJgJAkdTIgJEmdDAhJUicDQpLU6UX/Qbk1S/iZUUlayTyCkCR1\nMiAkSZ1elKeYPK0kSYfnEYQkqZMBIUnqdNiASLI5yZ4kjwzUTk6yLckT7fmkVk+SzyaZS/JQknMG\n1lnfln8iyfoj044kaVQWcgTxeWDdQbWNwF1VtRa4q70GuAhY2x4bgJugHyjAdcB5wLnAdftDRZI0\nng57kbqqvpNkzUHlS4GZNn0LMAt8uNW/UFUF3JPkxCSr27LbqupZgCTb6IfOl5fcwRjyx4MkrQTD\n3sU0VVW72/RPgak2fSrwk4Hlnm61+er/QJIN9I8+mJqaYnZ2dqgB9nq9ede99qx9Q21zGMOOv8uh\neppU9jQZ7Gly9Hq9kW1rybe5VlUlqVEMpm1vE7AJYHp6umZmZobazuzsLPOte+VRvM115+XdYxjG\noXqaVPY0Gexpcowy9Ia9i+mZduqI9ryn1XcBpw8sd1qrzVeXJI2pYQNiK7D/TqT1wDcG6le0u5nO\nB/a2U1F3AhckOaldnL6g1SRJY+qwp5iSfJn+ReZTkjxN/26kG4Bbk1wF/Bh4T1v8DuBiYA74FfB+\ngKp6NskngPvach/ff8FakjSeFnIX03vnmfW2jmULuHqe7WwGNi9qdJKkZeMnqSVJnV6UX9Z3NPmZ\nCEmTyiMISVInA0KS1MlTTEeRp5skTRKPICRJnQwISVInA0KS1MmAkCR1MiAkSZ0MCElSJwNCktTJ\nz0EsEz8TIWncvWgCYs1R/BU5SVoJPMUkSepkQEiSOhkQkqROBoQkqZMBIUnqtKSASLIzycNJtie5\nv9VOTrItyRPt+aRWT5LPJplL8lCSc0bRgCTpyBjFEcRbqursqppurzcCd1XVWuCu9hrgImBte2wA\nbhrBe0uSjpAjcYrpUuCWNn0L8M6B+heq7x7gxCSrj8D7S5JGIFU1/MrJU8BzQAH/pao2JXm+qk5s\n8wM8V1UnJrkduKGqvtvm3QV8uKruP2ibG+gfYTA1NfXGLV
u2DDW2Xq/HqlWrDrx+eNfeobZztJ11\n6ivnnXdwTyuBPU0Ge5ocvV6PSy655IGBszpDW+onqd9cVbuS/CNgW5IfDM6sqkqyqASqqk3AJoDp\n6emamZkZamCzs7MMrnvlhHySeuflM/POO7inlcCeJoM9TY7Z2dmRbWtJAVFVu9rzniS3AecCzyRZ\nXVW72ymkPW3xXcDpA6uf1moaMN9Xgvh9TZKOtqGvQSQ5Ickr9k8DFwCPAFuB9W2x9cA32vRW4Ip2\nN9P5wN6q2j30yCVJR9RSjiCmgNv6lxl4KfDXVfU/ktwH3JrkKuDHwHva8ncAFwNzwK+A9y/hvSVJ\nR9jQAVFVTwKv76j/DHhbR72Aq4d9P0nS0eUnqSVJnQwISVInA2JCrNn4TR7etdcfPpJ01BgQkqRO\nBoQkqZMBIUnqtNSv2tAyONR1CD9xLWlUPIKQJHXyCGKFGTy68GhC0lIYECuYYSFpKTzFJEnqZEBI\nkjp5iulFyFNPkhbCgHiR8Cs6JC2WAfEi59GEpPkYEDrAsJA0yIBQJ8NCkgGhw5ovLAwRaWUzILQo\nC7nYbXBIK4MBoZGYLzgWEhYGijSejnpAJFkH/CfgGOC/VtUNR+q9vLVzvHTtj2vP2sd8f4aGi7S8\njmpAJDkGuBF4O/A0cF+SrVX16NEch8bXQo5EFrvuoRgq0vyO9hHEucBcVT0JkGQLcClgQGhZzBcq\nBod09APiVOAnA6+fBs4bXCDJBmBDe9lL8viQ73UK8HdDrjuW/rU9HTX55JJWH8uelsieJscpwD8Z\nxYbG7iJ1VW0CNi11O0nur6rpEQxpbNjTZLCnybASe4IDfa0ZxbaO9re57gJOH3h9WqtJksbM0Q6I\n+4C1Sc5IchxwGbD1KI9BkrQAR/UUU1XtS3INcCf921w3V9WOI/R2Sz5NNYbsaTLY02RYiT3BCPtK\nVY1qW5KkFcRflJMkdTIgJEmdVlxAJFmX5PEkc0k2Lvd4FiPJziQPJ9me5P5WOznJtiRPtOeTWj1J\nPtv6fCjJOcs7+t9IsjnJniSPDNQW3UeS9W35J5KsX45eBsbS1dNHk+xq+2t7kosH5n2k9fR4kgsH\n6mPx95nk9CR3J3k0yY4kH2z1Sd9P8/U1yfvqZUnuTfJg6+ljrX5Gku+18X2l3fhDkuPb67k2f83A\ntjp7nVdVrZgH/QvfPwJeDRwHPAicudzjWsT4dwKnHFT7j8DGNr0R+GSbvhj4FhDgfOB7yz3+gTH/\nPnAO8MiwfQAnA0+255Pa9Elj1tNHgX/bseyZ7W/veOCM9jd5zDj9fQKrgXPa9CuAH7ZxT/p+mq+v\nSd5XAVa16WOB77V9cCtwWav/JfBHbfqPgb9s05cBXzlUr4d675V2BHHgqzyq6u+B/V/lMckuBW5p\n07cA7xyof6H67gFOTLJ6OQZ4sKr6DvDsQeXF9nEhsK2qnq2q54BtwLojP/pu8/Q0n0uBLVX1QlU9\nBczR/9scm7/PqtpdVd9v078AHqP/TQeTvp/m62s+k7Cvqqp67eWx7VHAW4GvtvrB+2r/Pvwq8LYk\nYf5e57XSAqLrqzwO9ccxbgr4n0keSP8rRwCmqmp3m/4pMNWmJ63XxfYxKf1d0065bN5/OoYJ66md\ngngD/f8zXTH76aC+YIL3VZJjkmwH9tAP4R8Bz1fVvo7xHRh7m78X+G2G6GmlBcSke3NVnQNcBFyd\n5PcHZ1b/OHHi70teKX0ANwH/FDgb2A18anmHs3hJVgFfAz5UVT8fnDfJ+6mjr4neV1X166o6m/63\nT5wL/O7ReN+VFhAT/VUeVbWrPe8BbqP/h/DM/lNH7XlPW3zSel1sH2PfX1U90/7F/b/A5/jN4fpE\n9JTkWPr/Ef1SVX29lSd+P3X1Nen7ar+qeh64G3gT/dN8+z/sPDi+A2Nv818J/IwhelppATGxX+WR\n5IQkr9g/DVwAPEJ//P
vvDFkPfKNNbwWuaHeXnA/sHTg1MI4W28edwAVJTmqnAy5otbFx0DWfd9Hf\nX9Dv6bJ2N8kZwFrgXsbo77Odk74ZeKyqPj0wa6L303x9Tfi+elWSE9v0y+n/ns5j9IPi3W2xg/fV\n/n34buDb7Whwvl7ntxxX5Y/kg/7dFj+kf47uT5d7PIsY96vp32HwILBj/9jpnzu8C3gC+Bvg5PrN\nnQ03tj4fBqaXu4eBXr5M/zD+/9A/z3nVMH0AH6B/IW0OeP8Y9vTFNuaH2r98qweW/9PW0+PAReP2\n9wm8mf7po4eA7e1x8QrYT/P1Ncn76p8Bf9vG/gjwH1r91fT/Az8H/Hfg+FZ/WXs91+a/+nC9zvfw\nqzYkSZ1W2ikmSdKIGBCSpE4GhCSpkwEhSepkQEiSOhkQkqROBoQkqdP/AynDVUfNtU3nAAAAAElF\nTkSuQmCC\n",
            "text/plain": [
              "<Figure size 432x288 with 1 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "olroXq3_WiQt",
        "colab_type": "text"
      },
      "source": [
        "# The Model (finally)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ccp5trMwRtmr",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "Now let's create a classification model using [adapter-BERT](https://arxiv.org/abs/1902.00751), which is a clever way \n",
        "of reducing the trainable parameter count: the original BERT weights stay frozen, \n",
        "and the internal activations are adapted with two FFN bottlenecks (the `adapter_size` parameter below) in every BERT layer. \n",
        "\n",
        "For sequence classification, BERT proposes a classifier acting on the `[CLS]` output alone. Such a classifier overfits easily and is difficult to regularize. \n",
        "As an alternative, we'll use max pooling over the complete output sequence. For regularization we rely entirely on layer normalization, small values for `adapter_size` and the classifier layer sizes, and a one-cycle learning-rate policy with a high maximum learning rate.\n",
        "\n",
        "(The intuition being that during pre-training the `[CLS]` output is never trained to capture a global sequence representation, so it would need more adaptation to find an optimal `[CLS]` representation during fine-tuning, while at the same time this would overfit all the other activations.)\n"
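        ,
        "\n",
        "To make the adapter idea concrete, here is a minimal NumPy sketch of a single adapter bottleneck (down-projection, nonlinearity, up-projection, residual add). This is an illustration only, not the `bert-for-tf2` implementation; the ReLU nonlinearity and the zero-initialized up-projection (mirroring the near-identity initialization that `adapter_init_scale` approximates) are assumptions of the sketch:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def adapter(h, W_down, b_down, W_up, b_up):\n",
        "    # h: (batch, hidden) activations; bottleneck width = W_down.shape[1]\n",
        "    z = np.maximum(0.0, h @ W_down + b_down)  # down-project + nonlinearity\n",
        "    return h + z @ W_up + b_up                # up-project + residual\n",
        "\n",
        "hidden, bottleneck = 768, 64\n",
        "h = np.random.randn(4, hidden)\n",
        "W_down = 0.01 * np.random.randn(hidden, bottleneck)\n",
        "b_down = np.zeros(bottleneck)\n",
        "W_up = np.zeros((bottleneck, hidden))  # zero init: the adapter starts as a no-op\n",
        "b_up = np.zeros(hidden)\n",
        "out = adapter(h, W_down, b_down, W_up, b_up)\n",
        "```\n",
        "\n",
        "With the up-projection initialized to zero, the adapter output equals its input, so fine-tuning starts exactly from the frozen pre-trained network, while each adapter adds only `2 * hidden * bottleneck + hidden + bottleneck` trainable parameters.\n"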
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6o2a5ZIvRcJq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def create_model(max_seq_len, \n",
        "                 adapter_size=64,\n",
        "                 batch_size=None,\n",
        "                 init_ckpt_file=None,\n",
        "                 init_bert_ckpt_file=bert_ckpt_file,\n",
        "                ):\n",
        "  \"\"\"Creates a classification model.\n",
        "  :param adapter_size: adapter bottleneck size - arXiv:1902.00751\n",
        "  \"\"\"\n",
        "\n",
        "  bert_params = params_from_pretrained_ckpt(os.path.dirname(init_bert_ckpt_file))\n",
        "  \n",
        "  # create the bert layer\n",
        "  bert_params.adapter_size = adapter_size\n",
        "  bert_params.adapter_init_scale = 1e-5\n",
        "  l_bert = BertModelLayer.from_params(bert_params, name=\"bert\")\n",
        "\n",
        "  max_pooling = True\n",
        "  if max_pooling:\n",
        "    model = keras.models.Sequential([\n",
        "        keras.layers.InputLayer(input_shape=(max_seq_len,),\n",
        "                                batch_size=batch_size,\n",
        "                                dtype=\"int32\", name=\"input_ids\"),\n",
        "        l_bert,\n",
        "\n",
        "        #keras.layers.TimeDistributed(keras.layers.Dropout(0.1)),\n",
        "        keras.layers.TimeDistributed(keras.layers.Dense(bert_params.hidden_size//32)),\n",
        "        keras.layers.TimeDistributed(keras.layers.LayerNormalization()),\n",
        "        keras.layers.TimeDistributed(keras.layers.Activation(\"tanh\")),\n",
        "\n",
        "        pf.Concat([\n",
        "          keras.layers.Lambda(lambda x: tf.math.reduce_max(x, axis=1)),  # GlobalMaxPooling1D   \n",
        "          keras.layers.GlobalAveragePooling1D()\n",
        "        ]),\n",
        "\n",
        "        #keras.layers.Dropout(0.5),\n",
        "        keras.layers.Dense(units=bert_params.hidden_size//16),\n",
        "        keras.layers.LayerNormalization(),\n",
        "        keras.layers.Activation(\"tanh\"),\n",
        "\n",
        "        keras.layers.Dense(units=2)\n",
        "    ])\n",
        "  else:\n",
        "    model = keras.models.Sequential([\n",
        "        keras.layers.InputLayer(input_shape=(max_seq_len,),\n",
        "                                batch_size=batch_size,\n",
        "                                dtype=\"int32\", name=\"input_ids\"),\n",
        "        l_bert,\n",
        "        keras.layers.Lambda(lambda seq: seq[:, 0, :]),\n",
        "        keras.layers.Dense(units=bert_params.hidden_size),\n",
        "        keras.layers.Activation(\"tanh\"),\n",
        "        keras.layers.Dense(units=2)      \n",
        "    ])\n",
        "  \n",
        "  model.build(input_shape=(batch_size, max_seq_len))\n",
        "  \n",
        "  # freeze the non-adapter BERT layers (takes effect when adapter_size is set)\n",
        "  l_bert.apply_adapter_freeze()\n",
        "  l_bert.embeddings_layer.trainable = False  # set True to unfreeze the embedding LayerNorms\n",
        "  \n",
        "  # apply global regularization on all trainable dense layers\n",
        "  pf.utils.add_dense_layer_loss(model,\n",
        "                                kernel_regularizer=keras.regularizers.l2(0.01),\n",
        "                                bias_regularizer=keras.regularizers.l2(0.01))\n",
        "  \n",
        "  model.compile(optimizer=pf.optimizers.RAdam(),\n",
        "                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
        "                metrics=[keras.metrics.SparseCategoricalAccuracy(name=\"acc\")])\n",
        "\n",
        "  # load the pre-trained model weights (once the input_shape is known)\n",
        "  if init_ckpt_file:\n",
        "    print(\"Loading model weights from:\", init_ckpt_file)\n",
        "    model.load_weights(init_ckpt_file)\n",
        "  elif init_bert_ckpt_file:\n",
        "    print(\"Loading pre-trained BERT layer from:\", init_bert_ckpt_file)\n",
        "    load_stock_weights(l_bert, init_bert_ckpt_file)\n",
        "\n",
        "      \n",
        "  return model\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fMEsolbw4Xo-",
        "colab_type": "text"
      },
      "source": [
        "A larger `max_seq_len` slows a transformer model down quadratically, \n",
        "but being equipped with a free TPU, we'll go for the maximum sequence length BERT supports, which is 512. We choose a really small `adapter_size` and a rather large learning rate, both of which are needed for regularization:"
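        ,
        "\n",
        "The quadratic cost comes from self-attention, which forms a `(seq_len, seq_len)` score matrix in every head. A tiny NumPy sketch of just the score computation (illustrative only, not the BERT implementation):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def attention_scores(Q, K):\n",
        "    # Q, K: (seq_len, d) -> (seq_len, seq_len) scaled dot-product scores\n",
        "    return (Q @ K.T) / np.sqrt(K.shape[1])\n",
        "\n",
        "Q = np.random.randn(512, 64)\n",
        "K = np.random.randn(512, 64)\n",
        "scores = attention_scores(Q, K)  # 512 * 512 entries; doubling seq_len quadruples this\n",
        "```\n"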
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9cyROAuVljmp",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "adapter_size = 4 #24\n",
        "max_seq_len  = 512\n",
        "batch_size   = 128"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kTy5V6Js5FTo",
        "colab_type": "text"
      },
      "source": [
        "So we are now finally ready to create our model. To make it run on a TPU, we need to wrap its creation in a `TPUStrategy` scope:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mAXDrezB0ijW",
        "colab_type": "code",
        "outputId": "bb2783f0-4d28-40a9-a267-c8a28f7020be",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        }
      },
      "source": [
        "assert not tf.executing_eagerly()\n",
        "\n",
        "cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)\n",
        "tf.tpu.experimental.initialize_tpu_system(cluster_resolver)\n",
        "tpu_strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)\n",
        "\n",
        "with tpu_strategy.scope():\n",
        "  model = create_model(max_seq_len,\n",
        "                       adapter_size, \n",
        "                       batch_size=batch_size,\n",
        "                       init_bert_ckpt_file=bert_ckpt_file)\n",
        "\n",
        "model.summary()"
      ],
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "WARNING: Logging before flag parsing goes to stderr.\n",
            "W0904 15:43:13.407160 140482202118016 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n",
            "Instructions for updating:\n",
            "Call initializer instance with the dtype argument instead of passing it to the constructor\n",
            "W0904 15:43:13.514033 140482202118016 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/initializers.py:119: calling RandomUniform.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n",
            "Instructions for updating:\n",
            "Call initializer instance with the dtype argument instead of passing it to the constructor\n"
          ],
          "name": "stderr"
        },
        {
          "output_type": "stream",
          "text": [
            "Loading pre-trained BERT layer from: gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt\n",
            "loader: No value for:[bert/encoder/layer_0/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_0/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_0/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_0/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_0/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_0/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_0/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_0/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_0/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_0/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_0/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_0/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_0/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_0/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_0/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_0/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_1/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_1/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_1/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_1/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_1/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_1/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_1/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_1/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_1/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_2/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_2/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_2/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_2/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_2/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_2/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_2/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_2/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_2/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_3/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_3/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_3/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_3/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_3/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_3/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_3/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_3/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_3/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_4/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_4/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_4/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_4/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_4/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_4/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_4/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_4/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_4/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_5/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_5/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_5/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_5/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_5/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_5/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_5/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_5/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_5/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_6/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_6/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_6/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_6/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_6/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_6/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_6/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_6/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_6/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_7/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_7/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_7/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_7/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_7/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_7/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_7/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_7/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_7/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_8/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_8/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_8/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_8/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_8/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_8/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_8/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_8/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_8/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_9/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_9/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_9/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_9/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_9/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_9/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_9/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_9/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_9/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_10/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_10/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_10/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_10/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_10/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_10/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_10/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_10/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_10/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/attention/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_11/attention/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/attention/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_11/attention/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/attention/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_11/attention/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/attention/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_11/attention/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/output/adapter-down/kernel:0], i.e.:[bert/encoder/layer_11/output/adapter-down/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/output/adapter-down/bias:0], i.e.:[bert/encoder/layer_11/output/adapter-down/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/output/adapter-up/kernel:0], i.e.:[bert/encoder/layer_11/output/adapter-up/kernel] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "loader: No value for:[bert/encoder/layer_11/output/adapter-up/bias:0], i.e.:[bert/encoder/layer_11/output/adapter-up/bias] in:[gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt]\n",
            "Done loading 196 BERT weights from: gs://kpe/colab/movie_review/bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt into <bert.model.BertModelLayer object at 0x7fc45677bcc0> (prefix:bert). Count of weights not found in the checkpoint was: [96]. Count of weights with mismatched shape: [0]\n",
            "Model: \"sequential\"\n",
            "_________________________________________________________________\n",
            "Layer (type)                 Output Shape              Param #   \n",
            "=================================================================\n",
            "bert (BertModelLayer)        (16, 512, 768)            109056096 \n",
            "_________________________________________________________________\n",
            "time_distributed (TimeDistri (16, 512, 24)             18456     \n",
            "_________________________________________________________________\n",
            "time_distributed_1 (TimeDist (16, 512, 24)             48        \n",
            "_________________________________________________________________\n",
            "time_distributed_2 (TimeDist (16, 512, 24)             0         \n",
            "_________________________________________________________________\n",
            "concat (Concat)              (16, 48)                  0         \n",
            "_________________________________________________________________\n",
            "dense_1 (Dense)              (16, 48)                  2352      \n",
            "_________________________________________________________________\n",
            "layer_normalization_1 (Layer (16, 48)                  96        \n",
            "_________________________________________________________________\n",
            "activation_1 (Activation)    (16, 48)                  0         \n",
            "_________________________________________________________________\n",
            "dense_2 (Dense)              (16, 2)                   98        \n",
            "=================================================================\n",
            "Total params: 109,077,146\n",
            "Trainable params: 223,898\n",
            "Non-trainable params: 108,853,248\n",
            "_________________________________________________________________\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SLmgCxwjacne",
        "colab_type": "text"
      },
      "source": [
        "Here we use `drop_remainder=True` to make sure the batch size stays fixed: the TPU compiles the computation for static shapes, so a smaller final batch would not fit:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "P8yFOEiIkswR",
        "colab_type": "code",
        "outputId": "03c58351-c4d4-473b-d57b-d61f49300a2a",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 89
        }
      },
      "source": [
        "\n",
        "ds = tfrecord_to_dataset([train_tfrecord_file])\n",
        "ds = ds.map(create_pad_example_fn(pad_len=max_seq_len))\n",
        "ds = ds.cache()\n",
        "ds = ds.shuffle(buffer_size=25000, seed=4711, reshuffle_each_iteration=True)\n",
        "ds = ds.repeat()\n",
        "\n",
        "ds = ds.batch(batch_size, drop_remainder=True)\n",
        "train_ds = ds"
      ],
      "execution_count": 21,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "W0904 15:44:42.516669 140482202118016 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/util/random_seed.py:58: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
            "Instructions for updating:\n",
            "Use tf.where in 2.0, which has the same broadcast rule as np.where\n"
          ],
          "name": "stderr"
        }
      ]
    },
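    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see what `drop_remainder=True` actually drops: with the 25000 IMDB training examples and the batch size of 128 used here, the dataset yields 195 full batches per epoch and discards a final partial batch of 40 examples. A plain-Python illustration (just for intuition; `tf.data` does the real batching):\n",
        "\n",
        "```python\n",
        "def batch_count(n_examples, batch_size, drop_remainder=True):\n",
        "    # number of batches a tf.data pipeline would yield\n",
        "    full, rem = divmod(n_examples, batch_size)\n",
        "    return full if (drop_remainder or rem == 0) else full + 1\n",
        "\n",
        "batch_count(25000, 128)         # 195 full batches\n",
        "batch_count(25000, 128, False)  # 196, the last batch holds only 40 examples\n",
        "```"
      ]
    },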
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5jFBFOOK46iF",
        "colab_type": "text"
      },
      "source": [
        "Once the model is trained, we'll store its weights at this path in our bucket:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XrQ6Zh2PigjO",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "trained_ckpt_file = os.path.join(OUTPUT_DIR, 'checkpoints','trained','movie_reviews.ckpt')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "chq22MBJ5DEw",
        "colab_type": "text"
      },
      "source": [
        "Now let's proceed with the actual training on the TPU. We'll use a one-cycle learning-rate policy with a large peak learning rate (which also acts as a regularizer)."
      ]
    },
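    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The schedule itself comes from `pf.utils.create_one_cycle_lr_scheduler` in params-flow. As a rough sketch of the idea (not the library's exact implementation): the learning rate ramps up linearly over the warm-up epochs, then decays exponentially towards `end_learn_rate`:\n",
        "\n",
        "```python\n",
        "def one_cycle_lr(epoch, max_lr=5e-3, end_lr=1e-6,\n",
        "                 warmup_epochs=20, total_epochs=30):\n",
        "    if epoch < warmup_epochs:\n",
        "        # linear warm-up towards the peak learning rate\n",
        "        return max_lr * (epoch + 1) / warmup_epochs\n",
        "    # exponential decay from max_lr down to end_lr\n",
        "    decay_epochs = total_epochs - warmup_epochs\n",
        "    decay_rate = (end_lr / max_lr) ** (1.0 / decay_epochs)\n",
        "    return max_lr * decay_rate ** (epoch + 1 - warmup_epochs)\n",
        "```"
      ]
    },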
    {
      "cell_type": "code",
      "metadata": {
        "id": "ZuLOkwonF-9S",
        "colab_type": "code",
        "outputId": "4c4d660f-8056-4389-9d61-73ebbfea1531",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        }
      },
      "source": [
        "#%%time\n",
        "\n",
        "if tf.io.gfile.exists(trained_ckpt_file):\n",
        "  model.load_weights(trained_ckpt_file)\n",
        "else:\n",
        "  log_dir = os.path.join(OUTPUT_DIR, \"log\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\n",
        "  tensorboard_callback = keras.callbacks.TensorBoard(log_dir=log_dir)\n",
        "\n",
        "  #total_epoch_count = 30\n",
        "  #lr_scheduler = pf.utils.create_one_cycle_lr_scheduler(max_learn_rate=2e-3,\n",
        "  #                                                      end_learn_rate=1e-6,\n",
        "  #                                                      warmup_epoch_count=25,\n",
        "  #                                                      total_epoch_count=total_epoch_count)\n",
        "  \n",
        "  total_epoch_count = 30\n",
        "  lr_scheduler = pf.utils.create_one_cycle_lr_scheduler(max_learn_rate=5e-3,\n",
        "                                                        end_learn_rate=1e-6,\n",
        "                                                        warmup_epoch_count=20,\n",
        "                                                        total_epoch_count=total_epoch_count)\n",
        "\n",
        "  model.fit(train_ds,\n",
        "            shuffle=True,\n",
        "            epochs=total_epoch_count,\n",
        "            steps_per_epoch=25000//batch_size,\n",
        "            callbacks=[lr_scheduler,\n",
        "                       keras.callbacks.EarlyStopping(patience=10, \n",
        "                                                     restore_best_weights=True, \n",
        "                                                     monitor='loss'), # TODO: validation on a TPU - how?\n",
        "                       tensorboard_callback])\n",
        "  model.save_weights(trained_ckpt_file, overwrite=True)"
      ],
      "execution_count": 23,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "W0904 15:46:13.160180 140482202118016 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_distributed.py:411: Variable.load (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.\n",
            "Instructions for updating:\n",
            "Prefer Variable.assign which has equivalent behavior in 2.X.\n"
          ],
          "name": "stderr"
        },
        {
          "output_type": "stream",
          "text": [
            "\n",
            "Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.\n",
            "Epoch 1/30\n",
            "  2/195 [..............................] - ETA: 1:19:17 - loss: 5.7554 - acc: 0.4688"
          ],
          "name": "stdout"
        },
        {
          "output_type": "stream",
          "text": [
            "W0904 15:47:02.460344 140482202118016 callbacks.py:257] Method (on_train_batch_end) is slow compared to the batch update (0.997378). Check your callbacks.\n"
          ],
          "name": "stderr"
        },
        {
          "output_type": "stream",
          "text": [
            "195/195 [==============================] - 126s 646ms/step - loss: 5.5815 - acc: 0.5403\n",
            "\n",
            "Epoch 00002: LearningRateScheduler reducing learning rate to 0.0002.\n",
            "Epoch 2/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 2.9938 - acc: 0.8579\n",
            "\n",
            "Epoch 00003: LearningRateScheduler reducing learning rate to 0.00030000000000000003.\n",
            "Epoch 3/30\n",
            "195/195 [==============================] - 75s 385ms/step - loss: 2.0853 - acc: 0.8992\n",
            "\n",
            "Epoch 00004: LearningRateScheduler reducing learning rate to 0.0004.\n",
            "Epoch 4/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 1.8854 - acc: 0.9113\n",
            "\n",
            "Epoch 00005: LearningRateScheduler reducing learning rate to 0.0005.\n",
            "Epoch 5/30\n",
            "195/195 [==============================] - 75s 385ms/step - loss: 1.7387 - acc: 0.9194\n",
            "\n",
            "Epoch 00006: LearningRateScheduler reducing learning rate to 0.0006000000000000001.\n",
            "Epoch 6/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 1.6411 - acc: 0.9221\n",
            "\n",
            "Epoch 00007: LearningRateScheduler reducing learning rate to 0.0007.\n",
            "Epoch 7/30\n",
            "195/195 [==============================] - 75s 385ms/step - loss: 1.5480 - acc: 0.9275\n",
            "\n",
            "Epoch 00008: LearningRateScheduler reducing learning rate to 0.0008.\n",
            "Epoch 8/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 1.4164 - acc: 0.9339\n",
            "\n",
            "Epoch 00009: LearningRateScheduler reducing learning rate to 0.0009000000000000001.\n",
            "Epoch 9/30\n",
            "195/195 [==============================] - 75s 386ms/step - loss: 1.4130 - acc: 0.9357\n",
            "\n",
            "Epoch 00010: LearningRateScheduler reducing learning rate to 0.001.\n",
            "Epoch 10/30\n",
            "195/195 [==============================] - 74s 382ms/step - loss: 1.3292 - acc: 0.9389\n",
            "\n",
            "Epoch 00011: LearningRateScheduler reducing learning rate to 0.0011.\n",
            "Epoch 11/30\n",
            "195/195 [==============================] - 75s 385ms/step - loss: 1.2213 - acc: 0.9441\n",
            "\n",
            "Epoch 00012: LearningRateScheduler reducing learning rate to 0.0012000000000000001.\n",
            "Epoch 12/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 1.1286 - acc: 0.9492\n",
            "\n",
            "Epoch 00013: LearningRateScheduler reducing learning rate to 0.0013000000000000002.\n",
            "Epoch 13/30\n",
            "195/195 [==============================] - 75s 385ms/step - loss: 1.0427 - acc: 0.9528\n",
            "\n",
            "Epoch 00014: LearningRateScheduler reducing learning rate to 0.0014.\n",
            "Epoch 14/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.9836 - acc: 0.9569\n",
            "\n",
            "Epoch 00015: LearningRateScheduler reducing learning rate to 0.0015.\n",
            "Epoch 15/30\n",
            "195/195 [==============================] - 75s 385ms/step - loss: 0.9515 - acc: 0.9568\n",
            "\n",
            "Epoch 00016: LearningRateScheduler reducing learning rate to 0.0016.\n",
            "Epoch 16/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.9155 - acc: 0.9597\n",
            "\n",
            "Epoch 00017: LearningRateScheduler reducing learning rate to 0.0017000000000000001.\n",
            "Epoch 17/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.9490 - acc: 0.9563\n",
            "\n",
            "Epoch 00018: LearningRateScheduler reducing learning rate to 0.0018000000000000002.\n",
            "Epoch 18/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.8301 - acc: 0.9630\n",
            "\n",
            "Epoch 00019: LearningRateScheduler reducing learning rate to 0.0019.\n",
            "Epoch 19/30\n",
            "195/195 [==============================] - 74s 382ms/step - loss: 0.8385 - acc: 0.9609\n",
            "\n",
            "Epoch 00020: LearningRateScheduler reducing learning rate to 0.002.\n",
            "Epoch 20/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.8292 - acc: 0.9627\n",
            "\n",
            "Epoch 00021: LearningRateScheduler reducing learning rate to 0.002.\n",
            "Epoch 21/30\n",
            "195/195 [==============================] - 75s 386ms/step - loss: 0.7779 - acc: 0.9646\n",
            "\n",
            "Epoch 00022: LearningRateScheduler reducing learning rate to 0.0009352484478226213.\n",
            "Epoch 22/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.5790 - acc: 0.9752\n",
            "\n",
            "Epoch 00023: LearningRateScheduler reducing learning rate to 0.0004373448295773112.\n",
            "Epoch 23/30\n",
            "195/195 [==============================] - 75s 386ms/step - loss: 0.3683 - acc: 0.9863\n",
            "\n",
            "Epoch 00024: LearningRateScheduler reducing learning rate to 0.00020451303651271454.\n",
            "Epoch 24/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.3091 - acc: 0.9887\n",
            "\n",
            "Epoch 00025: LearningRateScheduler reducing learning rate to 9.563524997900369e-05.\n",
            "Epoch 25/30\n",
            "195/195 [==============================] - 75s 387ms/step - loss: 0.3024 - acc: 0.9888\n",
            "\n",
            "Epoch 00026: LearningRateScheduler reducing learning rate to 4.4721359549995795e-05.\n",
            "Epoch 26/30\n",
            "195/195 [==============================] - 74s 382ms/step - loss: 0.2698 - acc: 0.9906\n",
            "\n",
            "Epoch 00027: LearningRateScheduler reducing learning rate to 2.091279105182545e-05.\n",
            "Epoch 27/30\n",
            "195/195 [==============================] - 74s 381ms/step - loss: 0.2753 - acc: 0.9899\n",
            "\n",
            "Epoch 00028: LearningRateScheduler reducing learning rate to 9.77932768542928e-06.\n",
            "Epoch 28/30\n",
            "195/195 [==============================] - 74s 380ms/step - loss: 0.2639 - acc: 0.9914\n",
            "\n",
            "Epoch 00029: LearningRateScheduler reducing learning rate to 4.573050519273263e-06.\n",
            "Epoch 29/30\n",
            "195/195 [==============================] - 75s 384ms/step - loss: 0.2518 - acc: 0.9911\n",
            "\n",
            "Epoch 00030: LearningRateScheduler reducing learning rate to 2.138469199982376e-06.\n",
            "Epoch 30/30\n",
            "195/195 [==============================] - 73s 376ms/step - loss: 0.2552 - acc: 0.9912\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "E_Bbk6c_hPud",
        "colab_type": "text"
      },
      "source": [
        "# Evaluation\n",
        "\n",
        "To evaluate the model, we first prepare a dataset:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dBkGzomBiWjG",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "\n",
        "def to_model_ds(tfrecord_file, batch_size=batch_size, drop_remainder=True):\n",
        "  ds = tfrecord_to_dataset([tfrecord_file])\n",
        "  ds = ds.map(create_pad_example_fn(pad_len=max_seq_len))\n",
        "  ds = ds.batch(batch_size, drop_remainder=drop_remainder)\n",
        "  return ds"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "loylsQ6V6FmA",
        "colab_type": "text"
      },
      "source": [
        "and call `evaluate()` on both the train and test sets:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-VUk8f8KYzeI",
        "colab_type": "code",
        "outputId": "ca96bbc7-a30e-4329-d74b-434a2d1bbd80",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 121
        }
      },
      "source": [
        "\n",
        "_, train_acc = model.evaluate(to_model_ds(train_tfrecord_file), steps=25000//batch_size)\n",
        "_, test_acc = model.evaluate(to_model_ds(test_tfrecord_file), steps=25000//batch_size)\n",
        "\n",
        "print(\"train acc\", train_acc)\n",
        "print(\" test acc\", test_acc)"
      ],
      "execution_count": 27,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "195/195 [==============================] - 87s 446ms/step\n",
            "195/195 [==============================] - 87s 446ms/step\n",
            "195/195 [==============================] - 87s 448ms/step\n",
            "195/195 [==============================] - 87s 448ms/step\n",
            "train acc 0.99739593\n",
            " test acc 0.94158655\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "T9A53X8V5V8G",
        "colab_type": "text"
      },
      "source": [
        "We could also create a fresh model, load the trained checkpoint into it, and evaluate it again.\n",
        "Beware, however, that this would take very long on a CPU, so you might prefer to run the lines below in a new GPU session:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BSqMu64oHzqy",
        "colab_type": "code",
        "outputId": "18cc192e-422e-49f5-aa55-37a5423db061",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 87
        }
      },
      "source": [
        "%%time \n",
        "\n",
        "trained_ckpt_file = os.path.join(OUTPUT_DIR, 'checkpoints','trained','movie_reviews.ckpt')\n",
        "\n",
        "#model = create_model(max_seq_len, \n",
        "#                     adapter_size=adapter_size,\n",
        "#                     init_ckpt_file=trained_ckpt_file)\n",
        "\n",
        "#_, train_acc = model.evaluate(to_model_ds(train_tfrecord_file, drop_remainder=False))\n",
        "#_, test_acc = model.evaluate(to_model_ds(test_tfrecord_file, drop_remainder=False))\n",
        "\n",
        "print(\"train acc\", train_acc)\n",
        "print(\" test acc\", test_acc)"
      ],
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "train acc 0.99739593\n",
            " test acc 0.94158655\n",
            "CPU times: user 318 µs, sys: 52 µs, total: 370 µs\n",
            "Wall time: 267 µs\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Mt-Rv-2hsW1j",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        ""
      ],
      "execution_count": 0,
      "outputs": []
    }
  ]
}