{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Image Captioning with Attention in Tensorflow 2.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook modifies the [Image Captioning with Attention Tensorflow 2.0 notebook](https://colab.sandbox.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/text/image_captioning.ipynb)\n",
    "to work with kubeflow pipelines.  This pipeline creates a model that can caption an image."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Before running notebook:\n",
    "Make sure you completed the setup instructions in the README (including creating the base image).\n",
    "\n",
    "### Install Kubeflow pipelines\n",
    "Install the `kfp` package if you haven't already."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip3 install kfp --upgrade"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Activate service account credentials\n",
    "This allows for using `gsutil` from the notebook to upload the dataset to GCS."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download dataset and upload to GCS "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, you have to download the [MS COCO dataset](http://cocodataset.org/#download).  This sample uses both the 2014 train images and 2014 train/val annotations.  The following cells download a small subset (<1000 imgs) of the dataset and the annotations to the GCS bucket specified below with `GCS_DATASET_PATH`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Location to download dataset and put onto GCS (should be associated\n",
    "# with Kubeflow project)\n",
    "GCS_BUCKET = 'gs://[YOUR-BUCKET-NAME]'\n",
    "GCS_DATASET_PATH = GCS_BUCKET + '/ms-coco'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Download images\n",
    "Downloads images to `${GCS_DATASET_PATH}/train2014/train2014`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download images (use -x to ignore ~99% of images)\n",
    "!gsutil -m rsync -x \".*0\\.jpg|.*1\\.jpg|.*2\\.jpg|.*3\\.jpg|.*4\\.jpg|.*5\\.jpg|.*6\\.jpg|.*7\\.jpg|.*8\\.jpg|.*09\\.jpg|.*19\\.jpg|.*29\\.jpg|.*39\\.jpg|.*49\\.jpg|.*59\\.jpg|.*69\\.jpg|.*79\\.jpg|.*89\\.jpg\" gs://images.cocodataset.org/train2014 {GCS_DATASET_PATH}/train2014/train2014\n",
    "\n",
    "# To download the entire dataset uncomment and use the following command instead\n",
    "# !gsutil -m rsync gs://images.cocodataset.org/train2014 {GCS_DATASET_PATH}/train2014/train2014"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Download annotations\n",
    "For some reason MS COCO blocks using `gsutil` with the annotations (GitHub issue [here](https://github.com/cocodataset/cocoapi/issues/216)).  You can work around this by downloading it locally, and then uploading it to GCS."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download to local, upload to GCS, then delete local download\n",
    "!wget http://images.cocodataset.org/annotations/annotations_trainval2014.zip\n",
    "!unzip annotations_trainval2014.zip -d annotations_trainval2014\n",
    "!gsutil -m cp -r annotations_trainval2014 {GCS_DATASET_PATH}\n",
    "!rm -r annotations_trainval2014\n",
    "!rm annotations_trainval2014.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup project info and imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Kubeflow project settings\n",
    "PROJECT_NAME = '[YOUR-PROJECT-NAME]' \n",
    "PIPELINE_STORAGE_PATH = GCS_BUCKET + '/ms-coco/components' # path to save pipeline component images\n",
    "BASE_IMAGE = 'gcr.io/%s/img-cap:latest' % PROJECT_NAME # using image created in README instructions\n",
    "\n",
    "# Target images for creating components\n",
    "PREPROCESS_IMG = 'gcr.io/%s/ms-coco/preprocess:latest' % PROJECT_NAME\n",
    "TOKENIZE_IMG = 'gcr.io/%s/ms-coco/tokenize:latest' % PROJECT_NAME\n",
    "TRAIN_IMG = 'gcr.io/%s/ms-coco/train:latest' % PROJECT_NAME\n",
    "PREDICT_IMG = 'gcr.io/%s/ms-coco/predict:latest' % PROJECT_NAME"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import kfp\n",
    "import kfp.dsl as dsl\n",
    "from kfp import compiler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create pipeline components"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data preprocessing component\n",
    "This component takes `num_examples` images from `dataset_path` and feeds them through the deep CNN inceptionV3 (without the head).  The model outputs a tensor of shape `(64 x 2048)` that represents (2048) features obtained after dividing the image into an 8x8 (64) grid. The resulting model outputs are stored in `OUTPUT_DIR`."
   ]
  },
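  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `(64, 2048)` shape comes from flattening InceptionV3's final `8x8x2048` feature map, as the component does with `tf.reshape`.  A quick shape check, using random data in place of real image features:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Shape check: the (8, 8, 2048) feature map flattens to (64, 2048).\n",
    "# Random data stands in for real InceptionV3 features.\n",
    "import tensorflow as tf\n",
    "\n",
    "batch_features = tf.random.uniform((1, 8, 8, 2048))\n",
    "flattened = tf.reshape(batch_features,\n",
    "                       (batch_features.shape[0], -1, batch_features.shape[3]))\n",
    "print(flattened.shape)  # (1, 64, 2048)"
   ]
  },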
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dsl.python_component(\n",
    "    name='img_data_preprocessing',\n",
    "    description='preprocesses images with inceptionV3',\n",
    "    base_image=BASE_IMAGE\n",
    ")\n",
    "def preprocess(dataset_path: str, num_examples: int, OUTPUT_DIR: str, \n",
    "        batch_size: int) -> str:\n",
    "    import json\n",
    "    import numpy as np\n",
    "    import tensorflow as tf\n",
    "    from tensorflow.python.lib.io import file_io\n",
    "    from sklearn.utils import shuffle\n",
    "    \n",
    "    if OUTPUT_DIR == 'default':\n",
    "        OUTPUT_DIR = dataset_path + '/preprocess/'\n",
    "    \n",
    "    annotation_file = dataset_path + '/annotations_trainval2014/annotations/captions_train2014.json'\n",
    "    PATH = dataset_path + '/train2014/train2014/'\n",
    "    files_downloaded = tf.io.gfile.listdir(PATH)\n",
    "    \n",
    "    # Read the json file (CHANGE open() TO file_io.FileIO to use GCS)\n",
    "    with file_io.FileIO(annotation_file, 'r') as f:\n",
    "        annotations = json.load(f)\n",
    "\n",
    "    # Store captions and image names in vectors\n",
    "    all_captions = []\n",
    "    all_img_name_vector = []\n",
    "    \n",
    "    print('Determining which images are in storage...')\n",
    "    for annot in annotations['annotations']:\n",
    "        caption = '<start> ' + annot['caption'] + ' <end>'\n",
    "        image_id = annot['image_id']\n",
    "        img_name = 'COCO_train2014_' + '%012d.jpg' % (image_id)\n",
    "        full_coco_image_path = PATH + img_name\n",
    "        \n",
    "        if img_name in files_downloaded: # Only have subset\n",
    "            all_img_name_vector.append(full_coco_image_path)\n",
    "            all_captions.append(caption)\n",
    "\n",
    "    # Shuffle captions and image_names together\n",
    "    train_captions, img_name_vector = shuffle(all_captions,\n",
    "                                              all_img_name_vector,\n",
    "                                              random_state=1)\n",
    "\n",
    "    # Select the first num_examples captions/imgs from the shuffled set\n",
    "    train_captions = train_captions[:num_examples]\n",
    "    img_name_vector = img_name_vector[:num_examples]\n",
    "    \n",
    "\n",
    "    \n",
    "    # Preprocess the images before feeding into inceptionV3\n",
    "    def load_image(image_path):\n",
    "        img = tf.io.read_file(image_path)\n",
    "        img = tf.image.decode_jpeg(img, channels=3)\n",
    "        img = tf.image.resize(img, (299, 299))\n",
    "        img = tf.keras.applications.inception_v3.preprocess_input(img)\n",
    "        return img, image_path\n",
    "    \n",
    "    # Create model for processing images \n",
    "    image_model = tf.keras.applications.InceptionV3(include_top=False,\n",
    "                                                weights='imagenet')\n",
    "    new_input = image_model.input\n",
    "    hidden_layer = image_model.layers[-1].output\n",
    "    image_features_extract_model = tf.keras.Model(new_input, hidden_layer)\n",
    "    \n",
    "    # Save extracted features in GCS\n",
    "    print('Extracting features from images...')\n",
    "    \n",
    "    # Get unique images\n",
    "    encode_train = sorted(set(img_name_vector))\n",
    "    \n",
    "    image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)\n",
    "    image_dataset = image_dataset.map(\n",
    "        load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(batch_size)\n",
    "    \n",
    "    for img, path in image_dataset:\n",
    "        batch_features = image_features_extract_model(img)\n",
    "        batch_features = tf.reshape(batch_features,\n",
    "                              (batch_features.shape[0], -1, batch_features.shape[3]))\n",
    "\n",
    "        for bf, p in zip(batch_features, path):\n",
    "            path_of_feature = p.numpy().decode(\"utf-8\")\n",
    "            \n",
    "            # Save to a different location and as numpy array\n",
    "            path_of_feature = path_of_feature.replace('.jpg', '.npy')\n",
    "            path_of_feature = path_of_feature.replace(PATH, OUTPUT_DIR)\n",
    "            np.save(file_io.FileIO(path_of_feature, 'w'), bf.numpy())\n",
    "    \n",
    "    # Create array for locations of preprocessed images\n",
    "    preprocessed_imgs = [img.replace('.jpg', '.npy') for img in img_name_vector]\n",
    "    preprocessed_imgs = [img.replace(PATH, OUTPUT_DIR) for img in preprocessed_imgs]\n",
    "    \n",
    "    # Save train_captions and preprocessed_imgs to file\n",
    "    train_cap_path = OUTPUT_DIR + 'train_captions.npy' # array of captions\n",
    "    preprocessed_imgs_path = OUTPUT_DIR + 'preprocessed_imgs.py'# array of paths to preprocessed images\n",
    "    \n",
    "    train_captions = np.array(train_captions)\n",
    "    np.save(file_io.FileIO(train_cap_path, 'w'), train_captions)\n",
    "    \n",
    "    preprocessed_imgs = np.array(preprocessed_imgs)\n",
    "    np.save(file_io.FileIO(preprocessed_imgs_path, 'w'), preprocessed_imgs)\n",
    "    \n",
    "    return (train_cap_path, preprocessed_imgs_path)"
   ]
  },
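  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Although the component is annotated as returning `str`, it returns a tuple of paths: Kubeflow Pipelines passes the output between components as the tuple's string representation, which downstream components parse with `ast.literal_eval` (aliased as `make_tuple` below).  A minimal sketch of that round trip, using placeholder paths:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of how the tuple output travels between components as a string.\n",
    "# The GCS paths below are placeholders, not real pipeline outputs.\n",
    "from ast import literal_eval as make_tuple\n",
    "\n",
    "serialized = str(('gs://my-bucket/ms-coco/preprocess/train_captions.npy',\n",
    "                  'gs://my-bucket/ms-coco/preprocess/preprocessed_imgs.npy'))\n",
    "train_cap_path, preprocessed_imgs_path = make_tuple(serialized)\n",
    "print(train_cap_path)"
   ]
  },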
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "preprocessing_img_op = compiler.build_python_component(\n",
    "    component_func=preprocess,\n",
    "    staging_gcs_path=PIPELINE_STORAGE_PATH,\n",
    "    base_image=BASE_IMAGE,\n",
    "    dependency=[kfp.compiler.VersionedDependency(name='scikit-learn', version='0.21.2')],\n",
    "    target_image=PREPROCESS_IMG)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Tokenizing component"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This component takes the training captions from the previous step and tokenizes them to convert them into numerical values so that they can be fed into the model as input.  It outputs the tokenized captions in `OUTPUT_DIR`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dsl.python_component(\n",
    "    name='tokenize_captions',\n",
    "    description='Tokenize captions to create training data',\n",
    "    base_image=BASE_IMAGE\n",
    ")\n",
    "def tokenize_captions(dataset_path: str, preprocess_output: str, OUTPUT_DIR: str,\n",
    "        top_k: int) -> str:\n",
    "    import pickle\n",
    "    import tensorflow as tf\n",
    "    import numpy as np\n",
    "    from tensorflow.python.lib.io import file_io\n",
    "    from io import BytesIO\n",
    "    from ast import literal_eval as make_tuple\n",
    "    \n",
    "    # Convert output from string to tuple and unpack\n",
    "    preprocess_output = make_tuple(preprocess_output)\n",
    "    train_caption_path = preprocess_output[0]\n",
    "    \n",
    "    if OUTPUT_DIR == 'default':\n",
    "        OUTPUT_DIR = dataset_path + '/tokenize/'\n",
    "    \n",
    "    tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,\n",
    "                                                  oov_token=\"<unk>\",\n",
    "                                                  filters='!\"#$%&()*+.,-/:;=?@[\\]^_`{|}~ ')\n",
    "    f = BytesIO(file_io.read_file_to_string(train_caption_path, \n",
    "                                            binary_mode=True))\n",
    "    train_captions = np.load(f)\n",
    "    \n",
    "    # Tokenize captions\n",
    "    tokenizer.fit_on_texts(train_captions)\n",
    "    train_seqs = tokenizer.texts_to_sequences(train_captions)\n",
    "    tokenizer.word_index['<pad>'] = 0\n",
    "    tokenizer.index_word[0] = '<pad>'\n",
    "    \n",
    "    cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')\n",
    "    \n",
    "    # Find the maximum length of any caption in our dataset\n",
    "    def calc_max_length(tensor):\n",
    "        return max(len(t) for t in tensor)\n",
    "    \n",
    "    max_length = calc_max_length(train_seqs)\n",
    "    \n",
    "    # Save tokenizer\n",
    "    tokenizer_file_path = OUTPUT_DIR + 'tokenizer.pickle'\n",
    "    with file_io.FileIO(tokenizer_file_path, 'wb') as output:\n",
    "        pickle.dump(tokenizer, output, protocol=pickle.HIGHEST_PROTOCOL)\n",
    "        \n",
    "    # Save train_seqs\n",
    "    cap_vector_file_path = OUTPUT_DIR + 'cap_vector.npy'\n",
    "    np.save(file_io.FileIO(cap_vector_file_path, 'w'), cap_vector)\n",
    "    \n",
    "    return str(max_length), tokenizer_file_path, cap_vector_file_path"
   ]
  },
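  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what this step produces, here is a toy run of the same Keras tokenization APIs on two made-up captions (the component additionally wraps each caption with `<start>`/`<end>` markers and saves the results to GCS):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of the tokenization performed by the component above.\n",
    "# The captions are made up for demonstration.\n",
    "import tensorflow as tf\n",
    "\n",
    "captions = ['a dog runs', 'a cat sleeps on a mat']\n",
    "tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=5000, oov_token='<unk>')\n",
    "tokenizer.fit_on_texts(captions)\n",
    "train_seqs = tokenizer.texts_to_sequences(captions)\n",
    "cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')\n",
    "print(cap_vector)  # the shorter caption is zero-padded on the right"
   ]
  },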
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "tokenize_captions_op = compiler.build_python_component(\n",
    "    component_func=tokenize_captions,\n",
    "    staging_gcs_path=PIPELINE_STORAGE_PATH,\n",
    "    base_image=BASE_IMAGE,\n",
    "    target_image=TOKENIZE_IMG)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Component for training model (and saving it)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This component trains the model by creating a `tf.data.Dataset` from the captions and preprocessed images.  The trained model is saved in `train_output_dir/checkpoints/`.  The training loss is plotted in tensorboard. There are various parameters of the model(s) that can be tuned, but it uses the values from the original notebook by default.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dsl.python_component(\n",
    "    name='model_training',\n",
    "    description='Trains image captioning model',\n",
    "    base_image=BASE_IMAGE\n",
    ")\n",
    "def train_model(dataset_path: str, preprocess_output: str, \n",
    "        tokenizing_output: str, train_output_dir: str, valid_output_dir: str, \n",
    "        batch_size: int, embedding_dim: int, units: int, EPOCHS: int)-> str:\n",
    "    import json\n",
    "    import time\n",
    "    import pickle\n",
    "    import models\n",
    "    import numpy as np\n",
    "    import tensorflow as tf\n",
    "    from io import BytesIO\n",
    "    from datetime import datetime\n",
    "    from sklearn.model_selection import train_test_split\n",
    "    from tensorflow.python.lib.io import file_io\n",
    "    from ast import literal_eval as make_tuple\n",
    "    \n",
    "    # Convert output from string to tuple and unpack\n",
    "    preprocess_output = make_tuple(preprocess_output)\n",
    "    tokenizing_output = make_tuple(tokenizing_output)\n",
    "    \n",
    "    # Unpack tuples\n",
    "    preprocessed_imgs_path = preprocess_output[1]\n",
    "    tokenizer_path = tokenizing_output[1]\n",
    "    cap_vector_file_path = tokenizing_output[2]\n",
    "    \n",
    "    if valid_output_dir == 'default':\n",
    "        valid_output_dir = dataset_path + '/valid/'\n",
    "    \n",
    "    if train_output_dir == 'default':\n",
    "        train_output_dir = dataset_path + '/train/'\n",
    "    \n",
    "    # load img_name_vector\n",
    "    f = BytesIO(file_io.read_file_to_string(preprocessed_imgs_path, binary_mode=True))\n",
    "    img_name_vector = np.load(f)\n",
    "    \n",
    "    # Load cap_vector\n",
    "    f = BytesIO(file_io.read_file_to_string(cap_vector_file_path, binary_mode=True))\n",
    "    cap_vector = np.load(f)\n",
    "    \n",
    "    # Load tokenizer\n",
    "    with file_io.FileIO(tokenizer_path, 'rb') as src:\n",
    "        tokenizer = pickle.load(src)\n",
    "    \n",
    "    # Split data into training and testing\n",
    "    img_name_train, img_name_val, cap_train, cap_val = train_test_split(\n",
    "                                                            img_name_vector,\n",
    "                                                            cap_vector,\n",
    "                                                            test_size=0.2,\n",
    "                                                            random_state=0)\n",
    "    \n",
    "    # Create tf.data dataset for training\n",
    "    BUFFER_SIZE = 1000 # common size used for shuffling dataset\n",
    "    vocab_size = len(tokenizer.word_index) + 1\n",
    "    num_steps = len(img_name_train) // batch_size\n",
    "    \n",
    "    # Shape of the vector extracted from InceptionV3 is (64, 2048)\n",
    "    features_shape = 2048\n",
    "    \n",
    "    # Load the numpy files\n",
    "    def map_func(img_name, cap):\n",
    "        f = BytesIO(file_io.read_file_to_string(img_name.decode('utf-8'), binary_mode=True))\n",
    "        img_tensor = np.load(f)\n",
    "        return img_tensor, cap\n",
    "    \n",
    "    dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))\n",
    "\n",
    "    # Use map to load the numpy files in parallel\n",
    "    dataset = dataset.map(lambda item1, item2: tf.numpy_function(\n",
    "              map_func, [item1, item2], [tf.float32, tf.int32]),\n",
    "              num_parallel_calls=tf.data.experimental.AUTOTUNE)\n",
    "\n",
    "    # Shuffle and batch\n",
    "    dataset = dataset.shuffle(BUFFER_SIZE).batch(batch_size)\n",
    "    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)\n",
    "    \n",
    "    # get models from models.py\n",
    "    encoder = models.CNN_Encoder(embedding_dim)\n",
    "    decoder = models.RNN_Decoder(embedding_dim, units, vocab_size)\n",
    "    \n",
    "    optimizer = tf.keras.optimizers.Adam()\n",
    "    loss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n",
    "        from_logits=True, reduction='none')\n",
    "    \n",
    "    # Create loss function\n",
    "    def loss_function(real, pred):\n",
    "        mask = tf.math.logical_not(tf.math.equal(real, 0))\n",
    "        loss_ = loss_object(real, pred)\n",
    "\n",
    "        mask = tf.cast(mask, dtype=loss_.dtype)\n",
    "        loss_ *= mask\n",
    "\n",
    "        return tf.reduce_mean(loss_)\n",
    "    \n",
    "    # Create check point for training model\n",
    "    ckpt = tf.train.Checkpoint(encoder=encoder,\n",
    "                           decoder=decoder,\n",
    "                           optimizer = optimizer)\n",
    "    ckpt_manager = tf.train.CheckpointManager(ckpt, train_output_dir + 'checkpoints/', max_to_keep=5)\n",
    "    start_epoch = 0\n",
    "    if ckpt_manager.latest_checkpoint:\n",
    "        start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])\n",
    "            \n",
    "    # Create training step\n",
    "    loss_plot = []\n",
    "    @tf.function\n",
    "    def train_step(img_tensor, target):\n",
    "        loss = 0\n",
    "\n",
    "        # initializing the hidden state for each batch\n",
    "        # because the captions are not related from image to image\n",
    "        hidden = decoder.reset_state(batch_size=target.shape[0])\n",
    "\n",
    "        dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * batch_size, 1)\n",
    "\n",
    "        with tf.GradientTape() as tape:\n",
    "            features = encoder(img_tensor)\n",
    "\n",
    "            for i in range(1, target.shape[1]):\n",
    "                # passing the features through the decoder\n",
    "                predictions, hidden, _ = decoder(dec_input, features, hidden)\n",
    "\n",
    "                loss += loss_function(target[:, i], predictions)\n",
    "\n",
    "                # using teacher forcing\n",
    "                dec_input = tf.expand_dims(target[:, i], 1)\n",
    "\n",
    "        total_loss = (loss / int(target.shape[1]))\n",
    "\n",
    "        trainable_variables = encoder.trainable_variables + decoder.trainable_variables\n",
    "\n",
    "        gradients = tape.gradient(loss, trainable_variables)\n",
    "\n",
    "        optimizer.apply_gradients(zip(gradients, trainable_variables))\n",
    "\n",
    "        return loss, total_loss\n",
    "    \n",
    "    # Create summary writers and loss for plotting loss in tensorboard\n",
    "    tensorboard_dir = train_output_dir + 'logs/' + datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
    "    train_summary_writer = tf.summary.create_file_writer(tensorboard_dir)\n",
    "    train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)\n",
    "    \n",
    "    # Train model\n",
    "    path_to_most_recent_ckpt = None\n",
    "    for epoch in range(start_epoch, EPOCHS):\n",
    "        start = time.time()\n",
    "        total_loss = 0\n",
    "\n",
    "        for (batch, (img_tensor, target)) in enumerate(dataset):\n",
    "            batch_loss, t_loss = train_step(img_tensor, target)\n",
    "            total_loss += t_loss\n",
    "            train_loss(t_loss)\n",
    "            if batch % 100 == 0:\n",
    "                print ('Epoch {} Batch {} Loss {:.4f}'.format(\n",
    "                  epoch + 1, batch, batch_loss.numpy() / int(target.shape[1])))\n",
    "        \n",
    "        \n",
    "        \n",
    "        # Storing the epoch end loss value to plot in tensorboard\n",
    "        with train_summary_writer.as_default():\n",
    "            tf.summary.scalar('loss per epoch', train_loss.result(), step=epoch)\n",
    "        \n",
    "        train_loss.reset_states()\n",
    "        \n",
    "        if epoch % 5 == 0:\n",
    "            path_to_most_recent_ckpt = ckpt_manager.save()\n",
    "\n",
    "        print ('Epoch {} Loss {:.6f}'.format(epoch + 1,\n",
    "                                             total_loss/num_steps))\n",
    "        print ('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))\n",
    "    \n",
    "    # Add plot of loss in tensorboard\n",
    "    metadata ={\n",
    "        'outputs': [{\n",
    "            'type': 'tensorboard',\n",
    "            'source': tensorboard_dir,\n",
    "        }]\n",
    "    }\n",
    "    with open('/mlpipeline-ui-metadata.json', 'w') as f:\n",
    "        json.dump(metadata, f)\n",
    "    \n",
    "    # Save validation data to use for predictions\n",
    "    val_cap_path = valid_output_dir + 'captions.npy'\n",
    "    np.save(file_io.FileIO(val_cap_path, 'w'), cap_val)\n",
    "    \n",
    "    val_img_path = valid_output_dir + 'images.npy'\n",
    "    np.save(file_io.FileIO(val_img_path, 'w'), img_name_val)\n",
    "    \n",
    "    return path_to_most_recent_ckpt, val_cap_path, val_img_path"
   ]
  },
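  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The masked loss in `loss_function` above ensures that `<pad>` tokens (id 0) contribute nothing to the gradient.  A small sanity check with arbitrary targets and logits:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check of the padding mask used in loss_function above.\n",
    "# Targets and logits are arbitrary; only the masking behavior matters.\n",
    "import tensorflow as tf\n",
    "\n",
    "loss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n",
    "    from_logits=True, reduction='none')\n",
    "\n",
    "real = tf.constant([4, 2, 0, 0])   # last two positions are <pad>\n",
    "pred = tf.random.uniform((4, 10))  # 4 positions, vocab of 10\n",
    "\n",
    "mask = tf.cast(tf.math.logical_not(tf.math.equal(real, 0)), tf.float32)\n",
    "masked_loss = loss_object(real, pred) * mask\n",
    "print(masked_loss.numpy())  # the two masked positions are exactly 0"
   ]
  },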
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "model_train_op = compiler.build_python_component(\n",
    "    component_func=train_model,\n",
    "    staging_gcs_path=PIPELINE_STORAGE_PATH,\n",
    "    base_image=BASE_IMAGE,\n",
    "    dependency=[kfp.compiler.VersionedDependency(name='scikit-learn', version='0.21.2')],\n",
    "    target_image=TRAIN_IMG)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Component for model prediction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This component uses the model to predict on a new image.  It prints the predicted and real caption in the logs and outputs the first 10 attention images with captions in tensorboard.  (Currently Kubeflow [only supports up to 10 outputs](https://github.com/kubeflow/pipelines/issues/1641) Tensorboard)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dsl.python_component(\n",
    "    name='model_predictions',\n",
    "    description='Predicts on images in validation set',\n",
    "    base_image=BASE_IMAGE\n",
    ")\n",
    "def predict(dataset_path: str, tokenizing_output: str, \n",
    "        model_train_output: str, preprocess_output_dir: str, \n",
    "        valid_output_dir: str, embedding_dim: int, units: int):\n",
    "    import pickle\n",
    "    import json\n",
    "    import models\n",
    "    import matplotlib.pyplot as plt\n",
    "    import numpy as np\n",
    "    import tensorflow as tf\n",
    "    from datetime import datetime\n",
    "    from io import BytesIO\n",
    "    from tensorflow.python.lib.io import file_io\n",
    "    from ast import literal_eval as make_tuple\n",
    "    \n",
    "    tokenizing_output = make_tuple(tokenizing_output)\n",
    "    model_train_output = make_tuple(model_train_output)\n",
    "    \n",
    "    # Unpack tuples\n",
    "    max_length = int(tokenizing_output[0])\n",
    "    tokenizer_path = tokenizing_output[1]\n",
    "    model_path = model_train_output[0]\n",
    "    val_cap_path = model_train_output[1]\n",
    "    val_img_path = model_train_output[2]\n",
    "    \n",
    "    if preprocess_output_dir == 'default':\n",
    "        preprocess_output_dir = dataset_path + '/preprocess/'\n",
    "    \n",
    "    if valid_output_dir == 'default':\n",
    "        valid_output_dir = dataset_path + '/valid/'\n",
    "        \n",
    "    tensorboard_dir = valid_output_dir + 'logs' + datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
    "    summary_writer = tf.summary.create_file_writer(tensorboard_dir)\n",
    "\n",
    "    # Load tokenizer, model, test_captions, and test_imgs\n",
    "    \n",
    "    # Load tokenizer\n",
    "    with file_io.FileIO(tokenizer_path, 'rb') as src:\n",
    "        tokenizer = pickle.load(src)\n",
    "    \n",
    "    vocab_size = len(tokenizer.word_index) + 1\n",
    "    \n",
    "    # Shape of the vector extracted from InceptionV3 is (64, 2048)\n",
    "    attention_features_shape = 64\n",
    "    features_shape = 2048\n",
    "    \n",
    "    encoder = models.CNN_Encoder(embedding_dim)\n",
    "    decoder = models.RNN_Decoder(embedding_dim, units, vocab_size)\n",
    "    \n",
    "    # Load model from checkpoint (encoder, decoder)\n",
    "    optimizer = tf.keras.optimizers.Adam()\n",
    "    ckpt = tf.train.Checkpoint(encoder=encoder,\n",
    "                           decoder=decoder, optimizer=optimizer)\n",
    "    ckpt.restore(model_path).expect_partial()\n",
    "    \n",
    "    # Load test captions\n",
    "    f = BytesIO(file_io.read_file_to_string(val_cap_path, \n",
    "                                            binary_mode=True))\n",
    "    cap_val = np.load(f)\n",
    "    \n",
    "    # load test images\n",
    "    f = BytesIO(file_io.read_file_to_string(val_img_path, \n",
    "                                            binary_mode=True))\n",
    "    img_name_val = np.load(f)\n",
    "    \n",
    "    # To get original image locations, replace .npy extension with .jpg and \n",
    "    # replace preprocessed path with path original images\n",
    "    PATH = dataset_path + '/train2014/train2014/'\n",
    "    img_name_val = [img.replace('.npy', '.jpg') for img in img_name_val]\n",
    "    img_name_val = [img.replace(preprocess_output_dir, PATH) for img in img_name_val]\n",
    "    \n",
    "    image_model = tf.keras.applications.InceptionV3(include_top=False,\n",
    "                                                weights='imagenet')\n",
    "    new_input = image_model.input\n",
    "    hidden_layer = image_model.layers[-1].output\n",
    "\n",
    "    image_features_extract_model = tf.keras.Model(new_input, hidden_layer)\n",
    "    \n",
    "    # Preprocess the images using InceptionV3\n",
    "    def load_image(image_path):\n",
    "        img = tf.io.read_file(image_path)\n",
    "        img = tf.image.decode_jpeg(img, channels=3)\n",
    "        img = tf.image.resize(img, (299, 299))\n",
    "        img = tf.keras.applications.inception_v3.preprocess_input(img)\n",
    "        return img, image_path\n",
    "    \n",
    "    # Run predictions\n",
    "    def evaluate(image):\n",
    "        attention_plot = np.zeros((max_length, attention_features_shape))\n",
    "\n",
    "        hidden = decoder.reset_state(batch_size=1)\n",
    "\n",
    "        temp_input = tf.expand_dims(load_image(image)[0], 0)\n",
    "        img_tensor_val = image_features_extract_model(temp_input)\n",
    "        img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))\n",
    "\n",
    "        features = encoder(img_tensor_val)\n",
    "\n",
    "        dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)\n",
    "        result = []\n",
    "\n",
    "        for i in range(max_length):\n",
    "            predictions, hidden, attention_weights = decoder(dec_input, features, hidden)\n",
    "\n",
    "            attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()\n",
    "\n",
    "            predicted_id = tf.argmax(predictions[0]).numpy()\n",
    "            result.append(tokenizer.index_word[predicted_id])\n",
    "\n",
    "            if tokenizer.index_word[predicted_id] == '<end>':\n",
    "                return result, attention_plot\n",
    "\n",
    "            dec_input = tf.expand_dims([predicted_id], 0)\n",
    "\n",
    "        attention_plot = attention_plot[:len(result), :]\n",
    "        return result, attention_plot\n",
    "    \n",
    "    # Modified to plot images on tensorboard\n",
    "    def plot_attention(image, result, attention_plot):\n",
    "        img = tf.io.read_file(image)\n",
    "        img = tf.image.decode_jpeg(img, channels=3)\n",
    "        temp_image = np.array(img.numpy())\n",
    "        \n",
    "        len_result = len(result)\n",
    "        for l in range(min(len_result, 10)): # Tensorboard only supports 10 imgs\n",
    "            temp_att = np.resize(attention_plot[l], (8, 8))\n",
    "            plt.title(result[l])\n",
    "            img = plt.imshow(temp_image)\n",
    "            plt.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())\n",
    "            \n",
    "            # Save plt to image to access in tensorboard\n",
    "            buf = BytesIO()\n",
    "            plt.savefig(buf, format='png')\n",
    "            buf.seek(0)\n",
    "            \n",
    "            final_im = tf.image.decode_png(buf.getvalue(), channels=4)\n",
    "            final_im = tf.expand_dims(final_im, 0)\n",
    "            with summary_writer.as_default():\n",
    "                tf.summary.image(\"attention\", final_im, step=l)\n",
    "    \n",
    "    # Select a random image to caption from validation set\n",
    "    rid = np.random.randint(0, len(img_name_val))\n",
    "    image = img_name_val[rid]\n",
    "    real_caption = ' '.join(tokenizer.index_word[i] for i in cap_val[rid] if i != 0)\n",
    "    result, attention_plot = evaluate(image)\n",
    "    print('Image:', image)\n",
    "    print('Real Caption:', real_caption)\n",
    "    print('Predicted Caption:', ' '.join(result))\n",
    "    plot_attention(image, result, attention_plot)\n",
    "    \n",
    "    # Register the TensorBoard log directory with the Kubeflow Pipelines UI\n",
    "    metadata = {\n",
    "        'outputs': [{\n",
    "            'type': 'tensorboard',\n",
    "            'source': tensorboard_dir,\n",
    "        }]\n",
    "    }\n",
    "    with open('/mlpipeline-ui-metadata.json', 'w') as f:\n",
    "        json.dump(metadata, f)\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "predict_op = compiler.build_python_component(\n",
    "    component_func=predict,\n",
    "    staging_gcs_path=PIPELINE_STORAGE_PATH,\n",
    "    base_image=BASE_IMAGE,\n",
    "    dependency=[kfp.compiler.VersionedDependency(name='matplotlib', version='3.1.0')],\n",
    "    target_image=PREDICT_IMG)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create and run pipeline\n",
    "### Create pipeline\n",
    "The pipeline parameters are specified below in the `caption_pipeline` function signature.  Using the value `'default'` for the output directories saves them in a subdirectory of `GCS_DATASET_PATH`.\n",
    "\n",
    "### Requirements\n",
    "* The pipeline can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.\n",
    "* Read/write permissions for the storage buckets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dsl.pipeline(\n",
    "    name='Image Captioning Pipeline',\n",
    "    description='A pipeline that trains a model to caption images'\n",
    ")\n",
    "def caption_pipeline(\n",
    "    dataset_path=GCS_DATASET_PATH,\n",
    "    num_examples=30000,\n",
    "    epochs=20,\n",
    "    training_batch_size=64,\n",
    "    hidden_state_size=512,\n",
    "    vocab_size=5000,\n",
    "    embedding_dim=256,\n",
    "    preprocessing_batch_size=16,\n",
    "    preprocessing_output_dir='default',\n",
    "    tokenizing_output_dir='default',\n",
    "    training_output_dir='default',\n",
    "    validation_output_dir='default',\n",
    "    ): \n",
    "    \n",
    "    preprocessing_img_task = preprocessing_img_op(\n",
    "        dataset_path, \n",
    "        output_dir=preprocessing_output_dir,\n",
    "        batch_size=preprocessing_batch_size, \n",
    "        num_examples=num_examples)\n",
    "    \n",
    "    tokenize_captions_task = tokenize_captions_op(\n",
    "        dataset_path, \n",
    "        preprocessing_img_task.output, \n",
    "        output_dir=tokenizing_output_dir, \n",
    "        top_k=vocab_size)\n",
    "    \n",
    "    model_train_task = model_train_op(\n",
    "        dataset_path, \n",
    "        preprocessing_img_task.output,\n",
    "        tokenize_captions_task.output,\n",
    "        train_output_dir=training_output_dir, \n",
    "        valid_output_dir=validation_output_dir,\n",
    "        batch_size=training_batch_size, \n",
    "        embedding_dim=embedding_dim, \n",
    "        units=hidden_state_size, \n",
    "        epochs=epochs)\n",
    "    \n",
    "    predict_task = predict_op(\n",
    "        dataset_path,\n",
    "        tokenize_captions_task.output, \n",
    "        model_train_task.output,\n",
    "        preprocess_output_dir=preprocessing_output_dir,\n",
    "        valid_output_dir=validation_output_dir,\n",
    "        embedding_dim=embedding_dim,\n",
    "        units=hidden_state_size)\n",
    "\n",
    "    # The pipeline should be able to authenticate to GCP.\n",
    "    # Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.\n",
    "    #\n",
    "    # For example, you may uncomment the following lines to use GSA keys.\n",
    "    # from kfp.gcp import use_gcp_secret\n",
    "    # kfp.dsl.get_pipeline_conf().add_op_transformer(use_gcp_secret('user-gcp-sa'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Run pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test run to make sure all parts of the pipeline are working properly\n",
    "arguments = {\n",
    "    'dataset_path': GCS_DATASET_PATH, \n",
    "    'num_examples': 100, # Small subset so the test run finishes quickly\n",
    "    'training_batch_size': 16, # Must be smaller, since training uses only 80 of the 100 examples\n",
    "}\n",
    "\n",
    "kfp.Client().create_run_from_pipeline_func(caption_pipeline, arguments=arguments)"
   ]
  },
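  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an alternative to launching a run directly, you can compile the pipeline to an archive and upload it through the Kubeflow Pipelines UI. A minimal sketch (the output filename is arbitrary):\n",
    "\n",
    "```python\n",
    "import kfp.compiler\n",
    "\n",
    "# Compile caption_pipeline (defined above) into an uploadable archive\n",
    "kfp.compiler.Compiler().compile(caption_pipeline, 'caption_pipeline.zip')\n",
    "```"
   ]
  },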
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Model checkpoints are saved at `training_output_dir`, which is `[GCS_DATASET_PATH]/train/checkpoints/` by default."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}