{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h1> Text Classification using TensorFlow/Keras on AI Platform </h1>\n",
    "\n",
    "This notebook illustrates:\n",
    "<ol>\n",
    "<li> Creating datasets for AI Platform using BigQuery\n",
    "<li> Creating a text classification model using the Estimator API with a Keras model\n",
    "<li> Training on Cloud AI Platform\n",
    "<li> Rerun with pre-trained embedding\n",
    "</ol>"
   ]
  },

  {
    "cell_type": "code",
    "metadata": {
      "id": "Nny3m465gKkY",
      "colab_type": "code",
      "colab": {}
    },
    "source": [
      "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst"
    ],
    "execution_count": null,
    "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
"Collecting google-cloud-bigquery==1.25.0\n",
"Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)\n",
     "|████████████████████████████████| 169 kB 4.7 MB/s eta 0:00:01\n",

"Requirement already satisfied:  six in /home/jupyter/.local/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: google-auth in /usr/local/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: google-resumable-media in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: google-cloud-core in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: protobuf in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: google-api-core in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: cachetools in /usr/local/lib/python3.7/dist-packages(from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: rsa in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: pyasn1-modules in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: googleapis-common-protos in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: pyasn1 in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Requirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery==1.25.0)\n",
"Installing collected packages: google-resumable-media, google-cloud-bigquery\n",
"\u001b[33mWARNING: You are using pip version 20.1; however, version 20.2.3 is available."
     ]
    }
   ],
   "source": [
    "!pip install --user google-cloud-bigquery==1.25.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Note**: Restart your kernel to use updated packages."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# change these to try this notebook out\n",
    "BUCKET = 'cloud-training-demos-ml'\n",
    "PROJECT = 'cloud-training-demos'\n",
    "REGION = 'us-central1'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['BUCKET'] = BUCKET\n",
    "os.environ['PROJECT'] = PROJECT\n",
    "os.environ['REGION'] = REGION\n",
    "os.environ['TFVERSION'] = '2.5'\n",
    "\n",
    "if 'COLAB_GPU' in os.environ:  # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached\n",
    "  from google.colab import auth\n",
    "  auth.authenticate_user()\n",
    "  # download \"sidecar files\" since on Colab, this notebook will be on Drive\n",
    "  !rm -rf txtclsmodel\n",
    "  !git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst\n",
    "  !mv  training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ .\n",
    "  !rm -rf training-data-analyst\n",
    "  # downgrade TensorFlow to the version this notebook has been tested with\n",
    "  !pip install --upgrade tensorflow==$TFVERSION"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
      "2.5.0"
      ]
     }
   ],
   "source": [
    "import tensorflow as tf\n",
    "print(tf.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub. \n",
    "\n",
    "We will use [hacker news](https://news.ycombinator.com/) as our data source. It is an aggregator that displays tech related headlines from various  sources."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Creating Dataset from BigQuery \n",
    "\n",
    "Hacker news headlines are available as a BigQuery public dataset. The [dataset](https://bigquery.cloud.google.com/table/bigquery-public-data:hacker_news.stories?tab=details) contains all headlines from the sites inception in October 2006 until October 2015. \n",
    "\n",
    "Here is a sample of the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext google.cloud.bigquery"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --project $PROJECT\n",
    "SELECT\n",
    "  url, title, score\n",
    "FROM\n",
    "  `bigquery-public-data.hacker_news.stories`\n",
    "WHERE\n",
    "  LENGTH(title) > 10\n",
    "  AND score > 10\n",
    "  AND LENGTH(url) > 0\n",
    "LIMIT 10"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --project $PROJECT\n",
    "SELECT\n",
    "  ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,\n",
    "  COUNT(title) AS num_articles\n",
    "FROM\n",
    "  `bigquery-public-data.hacker_news.stories`\n",
    "WHERE\n",
    "  REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')\n",
    "  AND LENGTH(title) > 10\n",
    "GROUP BY\n",
    "  source\n",
    "ORDER BY num_articles DESC\n",
    "LIMIT 10"
   ]
  },
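  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `ARRAY_REVERSE(SPLIT(...))[OFFSET(1)]` expression above pulls the host name out of the URL, splits it on dots, and keeps the second-to-last component (e.g. `nytimes` from `mobile.nytimes.com`). For illustration only, here is a rough Python sketch of the same logic; the pipeline itself relies on the BigQuery expression above.\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "def extract_source(url):\n",
    "    \"\"\"Rough Python equivalent of the BigQuery expression above (illustration only).\"\"\"\n",
    "    host = re.search(r'://([^/]+)/', url)           # REGEXP_EXTRACT(url, '.*://(.[^/]+)/')\n",
    "    if host is None:\n",
    "        return None\n",
    "    parts = host.group(1).split('.')                # SPLIT(..., '.')\n",
    "    return parts[-2] if len(parts) >= 2 else None   # ARRAY_REVERSE(...)[OFFSET(1)]\n",
    "\n",
    "print(extract_source('http://mobile.nytimes.com/2015/10/some-article'))  # -> nytimes\n",
    "```"
   ]
  },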
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from google.cloud import bigquery\n",
    "bq = bigquery.Client(project=PROJECT)\n",
    "\n",
    "query=\"\"\"\n",
    "SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM\n",
    "  (SELECT\n",
    "    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,\n",
    "    title\n",
    "  FROM\n",
    "    `bigquery-public-data.hacker_news.stories`\n",
    "  WHERE\n",
    "    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')\n",
    "    AND LENGTH(title) > 10\n",
    "  )\n",
    "WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')\n",
    "\"\"\"\n",
    "\n",
    "df = bq.query(query + \" LIMIT 5\").to_dataframe()\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).  \n",
    "\n",
    "A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "traindf = bq.query(query + \" AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0\").to_dataframe()\n",
    "evaldf  = bq.query(query + \" AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0\").to_dataframe()"
   ]
  },
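  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same idea can be illustrated locally. The sketch below uses Python's `hashlib` instead of BigQuery's `FARM_FINGERPRINT` (so the bucket assignments will not match the query above), but it shows the key property: the bucket depends only on the title, so the split is repeatable across runs and machines.\n",
    "\n",
    "```python\n",
    "import hashlib\n",
    "\n",
    "def hash_bucket(title, num_buckets=4):\n",
    "    \"\"\"Deterministic bucket in [0, num_buckets); uses MD5 for illustration, not FARM_FINGERPRINT.\"\"\"\n",
    "    digest = hashlib.md5(title.encode('utf-8')).hexdigest()\n",
    "    return int(digest, 16) % num_buckets\n",
    "\n",
    "# Titles whose bucket is > 0 (~75%) go to training; bucket == 0 (~25%) goes to evaluation.\n",
    "for title in ['show hn  a tool for repeatable sampling', 'ask hn  why hash on the title']:\n",
    "    split = 'train' if hash_bucket(title) > 0 else 'eval'\n",
    "    print(title, '->', split)\n",
    "```"
   ]
  },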
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below we can see that roughly 75% of the data is used for training, and 25% for evaluation. \n",
    "\n",
    "We can also see that within each dataset, the classes are roughly balanced."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "traindf['source'].value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "evaldf['source'].value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally we will save our data, which is currently in-memory, to disk."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os, shutil\n",
    "DATADIR='data/txtcls'\n",
    "shutil.rmtree(DATADIR, ignore_errors=True)\n",
    "os.makedirs(DATADIR)\n",
    "traindf.to_csv( os.path.join(DATADIR,'train.tsv'), header=False, index=False, encoding='utf-8', sep='\\t')\n",
    "evaldf.to_csv( os.path.join(DATADIR,'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\\t')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!head -3 data/txtcls/train.tsv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wc -l data/txtcls/*.tsv"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### TensorFlow/Keras Code\n",
    "\n",
    "Please explore the code in this <a href=\"txtclsmodel/trainer\">directory</a>: `model.py` contains the TensorFlow model and `task.py` parses command line arguments and launches off the training job.\n",
    "\n",
    "In particular look for the following:\n",
    "\n",
    "1. [tf.keras.preprocessing.text.Tokenizer.fit_on_texts()](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer#fit_on_texts) to generate a mapping from our word vocabulary to integers\n",
    "2. [tf.keras.preprocessing.text.Tokenizer.texts_to_sequences()](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer#texts_to_sequences) to encode our sentences into a sequence of their respective word-integers\n",
    "3. [tf.keras.preprocessing.sequence.pad_sequences()](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) to pad all sequences to be the same length\n",
    "\n",
    "The embedding layer in the keras model takes care of one-hot encoding these integers and learning a dense emedding represetation from them. \n",
    "\n",
    "Finally we pass the embedded text representation through a CNN model pictured below\n"
    ]
  },
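  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To get a feel for what those calls do, here is a toy sketch of the preprocessing steps and of the embedding-plus-CNN pattern. The vocabulary size, sequence length, and layer sizes below are made up for illustration; the real values and the real model live in `model.py`.\n",
    "\n",
    "```python\n",
    "import tensorflow as tf\n",
    "\n",
    "titles = ['google announces new machine learning api',\n",
    "          'show hn  my weekend project',\n",
    "          'the new york times adds a tech section']\n",
    "\n",
    "# 1. Build the word -> integer vocabulary.\n",
    "tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=1000)\n",
    "tokenizer.fit_on_texts(titles)\n",
    "\n",
    "# 2. Encode each title as a sequence of word-integers.\n",
    "sequences = tokenizer.texts_to_sequences(titles)\n",
    "\n",
    "# 3. Pad (or truncate) so every sequence has the same length.\n",
    "MAX_LEN = 10\n",
    "padded = tf.keras.preprocessing.sequence.pad_sequences(sequences, maxlen=MAX_LEN)\n",
    "print(padded)\n",
    "\n",
    "# A small embedding + Conv1D classifier in the same spirit as model.py.\n",
    "model = tf.keras.Sequential([\n",
    "    tf.keras.layers.Embedding(input_dim=1000, output_dim=16, input_length=MAX_LEN),\n",
    "    tf.keras.layers.Conv1D(filters=32, kernel_size=3, activation='relu'),\n",
    "    tf.keras.layers.GlobalMaxPooling1D(),\n",
    "    tf.keras.layers.Dense(3, activation='softmax')  # 3 classes: github, nytimes, techcrunch\n",
    "])\n",
    "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
    "model.summary()\n",
    "```"
   ]
  },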
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Run Locally (optional step)\n",
    "Let's make sure the code compiles by running locally for a fraction of an epoch.\n",
    "This may not work if you don't have all the packages installed locally for gcloud (such as in Colab).\n",
    "This is an optional step; move on to training on the cloud."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "pip install google-cloud-storage\n",
    "rm -rf txtcls_trained\n",
    "gcloud ai-platform local train \\\n",
    "   --module-name=trainer.task \\\n",
    "   --package-path=${PWD}/txtclsmodel/trainer \\\n",
    "   -- \\\n",
    "   --output_dir=${PWD}/txtcls_trained \\\n",
    "   --train_data_path=${PWD}/data/txtcls/train.tsv \\\n",
    "   --eval_data_path=${PWD}/data/txtcls/eval.tsv \\\n",
    "   --num_epochs=0.1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Train on the Cloud\n",
    "\n",
    "Let's first copy our training data to the cloud:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gcloud storage cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch\n",
    "JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)\n",
    "gcloud storage rm --recursive --continue-on-error $OUTDIR\n",
    "gcloud ai-platform jobs submit training $JOBNAME \\\n",
    " --region=$REGION \\\n",
    " --module-name=trainer.task \\\n",
    " --package-path=${PWD}/txtclsmodel/trainer \\\n",
    " --job-dir=$OUTDIR \\\n",
    " --scale-tier=BASIC_GPU \\\n",
    " --runtime-version 2.3 \\\n",
    " --python-version 3.7 \\\n",
    " -- \\\n",
    " --output_dir=$OUTDIR \\\n",
    " --train_data_path=gs://${BUCKET}/txtcls/train.tsv \\\n",
    " --eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \\\n",
    " --num_epochs=5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Change the job name appropriately. View the job in the console, and wait until the job is complete."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!gcloud ai-platform jobs describe txtcls_190209_224828"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Results\n",
    "What accuracy did you get? You should see around 80%."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Rerun with Pre-trained Embedding\n",
    "\n",
    "We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.\n",
    "\n",
    "You can read more about Glove at the project homepage: https://nlp.stanford.edu/projects/glove/\n",
    "\n",
    "You can download the embedding files directly from the stanford.edu site, but we've rehosted it in a GCS bucket for faster download speed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Copying gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt [Content-Type=text/plain]...\n",
      "- [1 files][661.3 MiB/661.3 MiB]      0.0 B/s                                   \n",
      "Operation completed over 1 objects/661.3 MiB.                                    \n"
     ]
    }
   ],
   "source": [
    "!gcloud storage cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the embedding is downloaded re-run your cloud training job with the added command line argument: \n",
    "\n",
    "` --embedding_path=gs://${BUCKET}/txtcls/glove.6B.200d.txt`\n",
    "\n",
    "While the final accuracy may not change significantly, you should notice the model is able to converge to it much more quickly because it no longer has to learn an embedding from scratch."
   ]
  },
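  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, using a pre-trained embedding typically means parsing the GloVe text file into a `{word: vector}` map and using it to initialize the weights of the Keras `Embedding` layer for the words in our vocabulary. The sketch below shows that common pattern; it is an illustration of the idea, not the exact code in `model.py`.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "EMBEDDING_DIM = 200  # matches glove.6B.200d.txt\n",
    "\n",
    "def load_glove(path):\n",
    "    \"\"\"Parse a GloVe .txt file into a {word: vector} dict.\"\"\"\n",
    "    embeddings = {}\n",
    "    with open(path, encoding='utf-8') as f:\n",
    "        for line in f:\n",
    "            values = line.split()\n",
    "            embeddings[values[0]] = np.asarray(values[1:], dtype='float32')\n",
    "    return embeddings\n",
    "\n",
    "def build_embedding_matrix(word_index, embeddings, num_words):\n",
    "    \"\"\"Row i holds the GloVe vector for word-integer i; words without a vector stay all-zero.\"\"\"\n",
    "    matrix = np.zeros((num_words, EMBEDDING_DIM))\n",
    "    for word, i in word_index.items():\n",
    "        if i < num_words and word in embeddings:\n",
    "            matrix[i] = embeddings[word]\n",
    "    return matrix\n",
    "\n",
    "# The matrix then seeds the embedding layer, e.g.\n",
    "# tf.keras.layers.Embedding(num_words, EMBEDDING_DIM, weights=[embedding_matrix], trainable=True)\n",
    "```"
   ]
  },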
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### References\n",
    "- This implementation is based on code from: https://github.com/google/eng-edu/tree/master/ml/guides/text_classification.\n",
    "- See the full text classification tutorial at: https://developers.google.com/machine-learning/guides/text-classification/\n",
    "\n",
    "## Next step\n",
    "Client-side tokenizing in Python is hugely problematic. See <a href=\"text_classification_native.ipynb\">Text classification with native serving</a> for how to carry out the preprocessing in the serving function itself."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
