{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.\n",
    "- Author: Sebastian Raschka\n",
    "- GitHub Repository: https://github.com/rasbt/deeplearning-models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "vY4SK0xKAJgm"
   },
   "source": [
    "# Bidirectional Multi-layer RNN with LSTM with Own Dataset in CSV Format (Amazon Review Polarity)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dataset Description\n",
    "\n",
    "```\n",
    "Amazon Review Polarity Dataset\n",
    "\n",
    "Version 3, Updated 09/09/2015\n",
    "\n",
    "ORIGIN\n",
    "\n",
    "The Amazon reviews dataset consists of reviews from amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plaintext review. For more information, please refer to the following paper: J. McAuley and J. Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. RecSys, 2013.\n",
    "\n",
    "The Amazon reviews polarity dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the above dataset. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).\n",
    "\n",
    "\n",
    "DESCRIPTION\n",
    "\n",
    "The Amazon reviews polarity dataset is constructed by taking review score 1 and 2 as negative, and 4 and 5 as positive. Samples of score 3 is ignored. In the dataset, class 1 is the negative and class 2 is the positive. Each class has 1,800,000 training samples and 200,000 testing samples.\n",
    "\n",
    "The files train.csv and test.csv contain all the training samples as comma-sparated values. There are 3 columns in them, corresponding to class index (1 or 2), review title and review text. The review title and text are escaped using double quotes (\"), and any internal double quote is escaped by 2 double quotes (\"\"). New lines are escaped by a backslash followed with an \"n\" character, that is \"\\n\".\n",
    "\n",
    "```"
   ]
  },
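  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The escaping convention above can be illustrated with Python's built-in `csv` module. The two rows below are made-up examples that follow the stated format, not actual dataset entries:\n",
    "\n",
    "```python\n",
    "import csv\n",
    "import io\n",
    "\n",
    "# Two made-up rows in the dataset's convention:\n",
    "# class index (1 = negative, 2 = positive), review title, review text.\n",
    "# Fields are double-quoted, internal quotes are doubled, and literal\n",
    "# newlines are stored as the two characters backslash + n.\n",
    "raw = ('\"1\",\"Broke quickly\",\"Stopped working after a week.\\\\nDo not buy.\"\\n'\n",
    "       '\"2\",\"Great CD\",\"I said \"\"wow\"\" the first time I heard it.\"\\n')\n",
    "\n",
    "rows = list(csv.reader(io.StringIO(raw)))\n",
    "print(rows[0][0])  # class index of the first row: '1'\n",
    "print(rows[1][2])  # the doubled quotes are unescaped to single quotes\n",
    "```"
   ]
  },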
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "moNmVfuvnImW"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sebastian Raschka \n",
      "\n",
      "CPython 3.7.3\n",
      "IPython 7.9.0\n",
      "\n",
      "torch 1.3.0\n"
     ]
    }
   ],
   "source": [
    "%load_ext watermark\n",
    "%watermark -a 'Sebastian Raschka' -v -p torch\n",
    "\n",
    "\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "from torchtext import data\n",
    "from torchtext import datasets\n",
    "import time\n",
    "import random\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "torch.backends.cudnn.deterministic = True"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GSRL42Qgy8I8"
   },
   "source": [
    "## General Settings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "OvW1RgfepCBq"
   },
   "outputs": [],
   "source": [
    "RANDOM_SEED = 123\n",
    "torch.manual_seed(RANDOM_SEED)\n",
    "\n",
    "VOCABULARY_SIZE = 5000\n",
    "LEARNING_RATE = 1e-3\n",
    "BATCH_SIZE = 128\n",
    "NUM_EPOCHS = 50\n",
    "DROPOUT = 0.5\n",
    "DEVICE = torch.device('cuda:2' if torch.cuda.is_available() else 'cpu')\n",
    "\n",
    "EMBEDDING_DIM = 128\n",
    "BIDIRECTIONAL = True\n",
    "HIDDEN_DIM = 256\n",
    "NUM_LAYERS = 2\n",
    "OUTPUT_DIM = 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "mQMmKUEisW4W"
   },
   "source": [
    "## Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Yelp Review Polarity dataset is available from Xiang Zhang's Google Drive folder at\n",
    "\n",
    "https://drive.google.com/drive/u/0/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M\n",
    "\n",
    "From the Google Drive folder, download the file \n",
    "\n",
    "- `amazon_review_polarity_csv.tar.gz`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "amazon_review_polarity_csv/\n",
      "amazon_review_polarity_csv/test.csv\n",
      "amazon_review_polarity_csv/train.csv\n",
      "amazon_review_polarity_csv/readme.txt\n"
     ]
    }
   ],
   "source": [
    "!tar xvzf amazon_review_polarity_csv.tar.gz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check that the dataset looks okay:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>classlabel</th>\n",
       "      <th>title</th>\n",
       "      <th>content</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>Stuning even for the non-gamer</td>\n",
       "      <td>This sound track was beautiful! It paints the ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>The best soundtrack ever to anything.</td>\n",
       "      <td>I'm reading a lot of reviews saying that this ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1</td>\n",
       "      <td>Amazing!</td>\n",
       "      <td>This soundtrack is my favorite music of all ti...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>1</td>\n",
       "      <td>Excellent Soundtrack</td>\n",
       "      <td>I truly like this soundtrack and I enjoy video...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>Remember, Pull Your Jaw Off The Floor After He...</td>\n",
       "      <td>If you've played the game, you know how divine...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   classlabel                                              title  \\\n",
       "0           1                     Stuning even for the non-gamer   \n",
       "1           1              The best soundtrack ever to anything.   \n",
       "2           1                                           Amazing!   \n",
       "3           1                               Excellent Soundtrack   \n",
       "4           1  Remember, Pull Your Jaw Off The Floor After He...   \n",
       "\n",
       "                                             content  \n",
       "0  This sound track was beautiful! It paints the ...  \n",
       "1  I'm reading a lot of reviews saying that this ...  \n",
       "2  This soundtrack is my favorite music of all ti...  \n",
       "3  I truly like this soundtrack and I enjoy video...  \n",
       "4  If you've played the game, you know how divine...  "
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = pd.read_csv('amazon_review_polarity_csv/train.csv', header=None, index_col=None)\n",
    "df.columns = ['classlabel', 'title', 'content']\n",
    "df['classlabel'] = df['classlabel']-1\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0, 1])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.unique(df['classlabel'].values)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([1800000, 1800000])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.bincount(df['classlabel'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[['classlabel', 'content']].to_csv('amazon_review_polarity_csv/train_prepocessed.csv', index=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>classlabel</th>\n",
       "      <th>title</th>\n",
       "      <th>content</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>Great CD</td>\n",
       "      <td>My lovely Pat has one of the GREAT voices of h...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>One of the best game music soundtracks - for a...</td>\n",
       "      <td>Despite the fact that I have only played a sma...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0</td>\n",
       "      <td>Batteries died within a year ...</td>\n",
       "      <td>I bought this charger in Jul 2003 and it worke...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>1</td>\n",
       "      <td>works fine, but Maha Energy is better</td>\n",
       "      <td>Check out Maha Energy's website. Their Powerex...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>Great for the non-audiophile</td>\n",
       "      <td>Reviewed quite a bit of the combo players and ...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   classlabel                                              title  \\\n",
       "0           1                                           Great CD   \n",
       "1           1  One of the best game music soundtracks - for a...   \n",
       "2           0                   Batteries died within a year ...   \n",
       "3           1              works fine, but Maha Energy is better   \n",
       "4           1                       Great for the non-audiophile   \n",
       "\n",
       "                                             content  \n",
       "0  My lovely Pat has one of the GREAT voices of h...  \n",
       "1  Despite the fact that I have only played a sma...  \n",
       "2  I bought this charger in Jul 2003 and it worke...  \n",
       "3  Check out Maha Energy's website. Their Powerex...  \n",
       "4  Reviewed quite a bit of the combo players and ...  "
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = pd.read_csv('amazon_review_polarity_csv/test.csv', header=None, index_col=None)\n",
    "df.columns = ['classlabel', 'title', 'content']\n",
    "df['classlabel'] = df['classlabel']-1\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0, 1])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.unique(df['classlabel'].values)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([200000, 200000])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.bincount(df['classlabel'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[['classlabel', 'content']].to_csv('amazon_review_polarity_csv/test_prepocessed.csv', index=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "del df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "4GnH64XvsV8n"
   },
   "source": [
    "Define the Label and Text field formatters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "TEXT = data.Field(sequential=True,\n",
    "                  tokenize='spacy',\n",
    "                  include_lengths=True) # necessary for packed_padded_sequence\n",
    "\n",
    "LABEL = data.LabelField(dtype=torch.float)\n",
    "\n",
    "\n",
    "# If you get an error [E050] Can't find model 'en'\n",
    "# you need to run the following on your command line:\n",
    "#  python -m spacy download en"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Process the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "fields = [('classlabel', LABEL), ('content', TEXT)]\n",
    "\n",
    "train_dataset = data.TabularDataset(\n",
    "    path=\"amazon_review_polarity_csv/train_prepocessed.csv\", format='csv',\n",
    "    skip_header=True, fields=fields)\n",
    "\n",
    "test_dataset = data.TabularDataset(\n",
    "    path=\"amazon_review_polarity_csv/test_prepocessed.csv\", format='csv',\n",
    "    skip_header=True, fields=fields)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Split the training dataset into training and validation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 68
    },
    "colab_type": "code",
    "id": "WZ_4jiHVnMxN",
    "outputId": "dfa51c04-4845-44c3-f50b-d36d41f132b8"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Num Train: 3420000\n",
      "Num Valid: 180000\n"
     ]
    }
   ],
   "source": [
    "train_data, valid_data = train_dataset.split(\n",
    "    split_ratio=[0.95, 0.05],\n",
    "    random_state=random.seed(RANDOM_SEED))\n",
    "\n",
    "print(f'Num Train: {len(train_data)}')\n",
    "print(f'Num Valid: {len(valid_data)}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "L-TBwKWPslPa"
   },
   "source": [
    "Build the vocabulary based on the top \"VOCABULARY_SIZE\" words:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 51
    },
    "colab_type": "code",
    "id": "e8uNrjdtn4A8",
    "outputId": "6cf499d7-7722-4da0-8576-ee0f218cc6e3"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Vocabulary size: 5002\n",
      "Number of classes: 2\n"
     ]
    }
   ],
   "source": [
    "TEXT.build_vocab(train_data,\n",
    "                 max_size=VOCABULARY_SIZE,\n",
    "                 vectors='glove.6B.100d',\n",
    "                 unk_init=torch.Tensor.normal_)\n",
    "\n",
    "LABEL.build_vocab(train_data)\n",
    "\n",
    "print(f'Vocabulary size: {len(TEXT.vocab)}')\n",
    "print(f'Number of classes: {len(LABEL.vocab)}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['1', '0']"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "list(LABEL.vocab.freqs)[-10:]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "JpEMNInXtZsb"
   },
   "source": [
    "The TEXT.vocab dictionary will contain the word counts and indices. The reason why the number of words is VOCABULARY_SIZE + 2 is that it contains to special tokens for padding and unknown words: `<unk>` and `<pad>`."
   ]
  },
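  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Roughly, `build_vocab` keeps the `max_size` most frequent words and prepends the special tokens, which is why the total comes out to `VOCABULARY_SIZE + 2`. A minimal pure-Python sketch of that logic (not torchtext's actual implementation):\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "def build_vocab_sketch(tokens, max_size):\n",
    "    # Count word frequencies, keep the max_size most common words,\n",
    "    # and prepend the two special tokens (mirroring torchtext's defaults).\n",
    "    counter = Counter(tokens)\n",
    "    itos = ['<unk>', '<pad>'] + [w for w, _ in counter.most_common(max_size)]\n",
    "    stoi = {w: i for i, w in enumerate(itos)}\n",
    "    return itos, stoi\n",
    "\n",
    "itos, stoi = build_vocab_sketch('the cat sat on the mat the end'.split(), max_size=3)\n",
    "print(len(itos))  # 3 + 2 special tokens = 5\n",
    "print(itos[:2])   # ['<unk>', '<pad>']\n",
    "```"
   ]
  },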
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "eIQ_zfKLwjKm"
   },
   "source": [
    "Make dataset iterators:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "i7JiHR1stHNF"
   },
   "outputs": [],
   "source": [
    "train_loader, valid_loader, test_loader = data.BucketIterator.splits(\n",
    "    (train_data, valid_data, test_dataset), \n",
    "    batch_size=BATCH_SIZE,\n",
    "    sort_within_batch=True, # necessary for packed_padded_sequence\n",
    "    sort_key=lambda x: len(x.content),\n",
    "    device=DEVICE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "R0pT_dMRvicQ"
   },
   "source": [
    "Testing the iterators (note that the number of rows depends on the longest document in the respective batch):"
   ]
  },
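  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Within a batch, every document is padded to the length of the longest one, so sorting by length keeps similarly sized documents together and minimizes wasted padding. A plain-Python sketch of that padding step (not the torchtext internals; torchtext's default `<pad>` index is 1):\n",
    "\n",
    "```python\n",
    "def pad_batch(docs, pad_idx=1):\n",
    "    # Pad each (already index-encoded) document to the length of the\n",
    "    # longest document in the batch, forming a rectangular matrix.\n",
    "    max_len = max(len(d) for d in docs)\n",
    "    return [d + [pad_idx] * (max_len - len(d)) for d in docs]\n",
    "\n",
    "print(pad_batch([[5, 8, 2], [7, 4], [9]]))  # [[5, 8, 2], [7, 4, 1], [9, 1, 1]]\n",
    "```"
   ]
  },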
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 204
    },
    "colab_type": "code",
    "id": "y8SP_FccutT0",
    "outputId": "fe33763a-4560-4dee-adee-31cc6c48b0b2"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train\n",
      "Text matrix size: torch.Size([74, 128])\n",
      "Target vector size: torch.Size([128])\n",
      "\n",
      "Valid:\n",
      "Text matrix size: torch.Size([14, 128])\n",
      "Target vector size: torch.Size([128])\n",
      "\n",
      "Test:\n",
      "Text matrix size: torch.Size([12, 128])\n",
      "Target vector size: torch.Size([128])\n"
     ]
    }
   ],
   "source": [
    "print('Train')\n",
    "for batch in train_loader:\n",
    "    print(f'Text matrix size: {batch.content[0].size()}')\n",
    "    print(f'Target vector size: {batch.classlabel.size()}')\n",
    "    break\n",
    "    \n",
    "print('\\nValid:')\n",
    "for batch in valid_loader:\n",
    "    print(f'Text matrix size: {batch.content[0].size()}')\n",
    "    print(f'Target vector size: {batch.classlabel.size()}')\n",
    "    break\n",
    "    \n",
    "print('\\nTest:')\n",
    "for batch in test_loader:\n",
    "    print(f'Text matrix size: {batch.content[0].size()}')\n",
    "    print(f'Target vector size: {batch.classlabel.size()}')\n",
    "    break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "G_grdW3pxCzz"
   },
   "source": [
    "## Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "nQIUm5EjxFNa"
   },
   "outputs": [],
   "source": [
    "import torch.nn as nn\n",
    "\n",
    "\n",
    "class RNN(nn.Module):\n",
    "    def __init__(self, input_dim, embedding_dim, bidirectional, hidden_dim, num_layers, output_dim, dropout, pad_idx):\n",
    "        \n",
    "        super().__init__()\n",
    "        \n",
    "        self.embedding = nn.Embedding(input_dim, embedding_dim, padding_idx=pad_idx)\n",
    "        self.rnn = nn.LSTM(embedding_dim, \n",
    "                           hidden_dim,\n",
    "                           num_layers=num_layers,\n",
    "                           bidirectional=bidirectional, \n",
    "                           dropout=dropout)\n",
    "        self.fc1 = nn.Linear(hidden_dim * num_layers, 64)\n",
    "        self.fc2 = nn.Linear(64, output_dim)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "        \n",
    "    def forward(self, text, text_length):\n",
    "\n",
    "        embedded = self.dropout(self.embedding(text))\n",
    "        packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_length)\n",
    "        packed_output, (hidden, cell) = self.rnn(packed_embedded)\n",
    "        output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)\n",
    "        hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim=1))\n",
    "        hidden = self.fc1(hidden)\n",
    "        return hidden"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "Ik3NF3faxFmZ"
   },
   "outputs": [],
   "source": [
    "INPUT_DIM = len(TEXT.vocab)\n",
    "\n",
    "PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]\n",
    "\n",
    "torch.manual_seed(RANDOM_SEED)\n",
    "model = RNN(INPUT_DIM, EMBEDDING_DIM, BIDIRECTIONAL, HIDDEN_DIM, NUM_LAYERS, OUTPUT_DIM, DROPOUT, PAD_IDX)\n",
    "model = model.to(DEVICE)\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Lv9Ny9di6VcI"
   },
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "T5t1Afn4xO11"
   },
   "outputs": [],
   "source": [
    "def compute_accuracy(model, data_loader, device):\n",
    "    model.eval()\n",
    "    correct_pred, num_examples = 0, 0\n",
    "    with torch.no_grad():\n",
    "        for batch_idx, batch_data in enumerate(data_loader):\n",
    "            text, text_lengths = batch_data.content\n",
    "            logits = model(text, text_lengths).squeeze(1)\n",
    "            _, predicted_labels = torch.max(logits, 1)\n",
    "            num_examples += batch_data.classlabel.size(0)\n",
    "            correct_pred += (predicted_labels.long() == batch_data.classlabel.long()).sum()\n",
    "        return correct_pred.float()/num_examples * 100"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1836
    },
    "colab_type": "code",
    "id": "EABZM8Vo0ilB",
    "outputId": "5d45e293-9909-4588-e793-8dfaf72e5c67"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 001/050 | Batch 000/26719 | Cost: 4.1805\n",
      "Epoch: 001/050 | Batch 10000/26719 | Cost: 0.2005\n",
      "Epoch: 001/050 | Batch 20000/26719 | Cost: 0.1998\n",
      "training accuracy: 93.34%\n",
      "valid accuracy: 93.27%\n",
      "Time elapsed: 33.40 min\n",
      "Epoch: 002/050 | Batch 000/26719 | Cost: 0.1659\n",
      "Epoch: 002/050 | Batch 10000/26719 | Cost: 0.1326\n",
      "Epoch: 002/050 | Batch 20000/26719 | Cost: 0.1470\n",
      "training accuracy: 93.82%\n",
      "valid accuracy: 93.63%\n",
      "Time elapsed: 66.69 min\n",
      "Epoch: 003/050 | Batch 000/26719 | Cost: 0.1256\n",
      "Epoch: 003/050 | Batch 10000/26719 | Cost: 0.1980\n",
      "Epoch: 003/050 | Batch 20000/26719 | Cost: 0.2041\n",
      "training accuracy: 93.98%\n",
      "valid accuracy: 93.82%\n",
      "Time elapsed: 100.02 min\n",
      "Epoch: 004/050 | Batch 000/26719 | Cost: 0.2103\n",
      "Epoch: 004/050 | Batch 10000/26719 | Cost: 0.1100\n",
      "Epoch: 004/050 | Batch 20000/26719 | Cost: 0.1851\n",
      "training accuracy: 94.11%\n",
      "valid accuracy: 93.93%\n",
      "Time elapsed: 133.32 min\n",
      "Epoch: 005/050 | Batch 000/26719 | Cost: 0.2196\n",
      "Epoch: 005/050 | Batch 10000/26719 | Cost: 0.1209\n",
      "Epoch: 005/050 | Batch 20000/26719 | Cost: 0.2147\n",
      "training accuracy: 94.13%\n",
      "valid accuracy: 93.93%\n",
      "Time elapsed: 166.67 min\n",
      "Epoch: 006/050 | Batch 000/26719 | Cost: 0.1908\n",
      "Epoch: 006/050 | Batch 10000/26719 | Cost: 0.2187\n",
      "Epoch: 006/050 | Batch 20000/26719 | Cost: 0.2253\n",
      "training accuracy: 94.15%\n",
      "valid accuracy: 93.93%\n",
      "Time elapsed: 199.87 min\n",
      "Epoch: 007/050 | Batch 000/26719 | Cost: 0.1990\n",
      "Epoch: 007/050 | Batch 10000/26719 | Cost: 0.1928\n",
      "Epoch: 007/050 | Batch 20000/26719 | Cost: 0.2113\n",
      "training accuracy: 94.21%\n",
      "valid accuracy: 93.97%\n",
      "Time elapsed: 233.25 min\n",
      "Epoch: 008/050 | Batch 000/26719 | Cost: 0.1753\n",
      "Epoch: 008/050 | Batch 10000/26719 | Cost: 0.1708\n",
      "Epoch: 008/050 | Batch 20000/26719 | Cost: 0.2158\n",
      "training accuracy: 94.21%\n",
      "valid accuracy: 93.97%\n",
      "Time elapsed: 266.51 min\n",
      "Epoch: 009/050 | Batch 000/26719 | Cost: 0.2423\n",
      "Epoch: 009/050 | Batch 10000/26719 | Cost: 0.1097\n",
      "Epoch: 009/050 | Batch 20000/26719 | Cost: 0.1727\n",
      "training accuracy: 94.18%\n",
      "valid accuracy: 93.98%\n",
      "Time elapsed: 299.86 min\n",
      "Epoch: 010/050 | Batch 000/26719 | Cost: 0.1474\n",
      "Epoch: 010/050 | Batch 10000/26719 | Cost: 0.2041\n",
      "Epoch: 010/050 | Batch 20000/26719 | Cost: 0.1127\n",
      "training accuracy: 94.13%\n",
      "valid accuracy: 93.91%\n",
      "Time elapsed: 333.10 min\n",
      "Epoch: 011/050 | Batch 000/26719 | Cost: 0.1643\n",
      "Epoch: 011/050 | Batch 10000/26719 | Cost: 0.1772\n",
      "Epoch: 011/050 | Batch 20000/26719 | Cost: 0.1586\n",
      "training accuracy: 94.13%\n",
      "valid accuracy: 93.92%\n",
      "Time elapsed: 366.48 min\n",
      "Epoch: 012/050 | Batch 000/26719 | Cost: 0.1335\n",
      "Epoch: 012/050 | Batch 10000/26719 | Cost: 0.1680\n",
      "Epoch: 012/050 | Batch 20000/26719 | Cost: 0.1775\n",
      "training accuracy: 94.04%\n",
      "valid accuracy: 93.80%\n",
      "Time elapsed: 399.85 min\n",
      "Epoch: 013/050 | Batch 000/26719 | Cost: 0.1896\n",
      "Epoch: 013/050 | Batch 10000/26719 | Cost: 0.0957\n",
      "Epoch: 013/050 | Batch 20000/26719 | Cost: 0.1700\n",
      "training accuracy: 94.02%\n",
      "valid accuracy: 93.80%\n",
      "Time elapsed: 432.30 min\n",
      "Epoch: 014/050 | Batch 000/26719 | Cost: 0.1370\n",
      "Epoch: 014/050 | Batch 10000/26719 | Cost: 0.1449\n",
      "Epoch: 014/050 | Batch 20000/26719 | Cost: 0.1874\n",
      "training accuracy: 93.96%\n",
      "valid accuracy: 93.80%\n",
      "Time elapsed: 463.91 min\n",
      "Epoch: 015/050 | Batch 000/26719 | Cost: 0.1289\n",
      "Epoch: 015/050 | Batch 10000/26719 | Cost: 0.1852\n",
      "Epoch: 015/050 | Batch 20000/26719 | Cost: 0.1166\n",
      "training accuracy: 93.79%\n",
      "valid accuracy: 93.64%\n",
      "Time elapsed: 495.59 min\n",
      "Epoch: 016/050 | Batch 000/26719 | Cost: 0.1109\n",
      "Epoch: 016/050 | Batch 10000/26719 | Cost: 0.1259\n",
      "Epoch: 016/050 | Batch 20000/26719 | Cost: 0.1309\n",
      "training accuracy: 93.75%\n",
      "valid accuracy: 93.58%\n",
      "Time elapsed: 527.20 min\n",
      "Epoch: 017/050 | Batch 000/26719 | Cost: 0.2273\n",
      "Epoch: 017/050 | Batch 10000/26719 | Cost: 0.1037\n",
      "Epoch: 017/050 | Batch 20000/26719 | Cost: 0.1274\n",
      "training accuracy: 93.58%\n",
      "valid accuracy: 93.43%\n",
      "Time elapsed: 558.80 min\n",
      "Epoch: 018/050 | Batch 000/26719 | Cost: 0.1924\n",
      "Epoch: 018/050 | Batch 10000/26719 | Cost: 0.1870\n",
      "Epoch: 018/050 | Batch 20000/26719 | Cost: 0.2183\n",
      "training accuracy: 93.61%\n",
      "valid accuracy: 93.51%\n",
      "Time elapsed: 590.48 min\n",
      "Epoch: 019/050 | Batch 000/26719 | Cost: 0.1955\n",
      "Epoch: 019/050 | Batch 10000/26719 | Cost: 0.1745\n",
      "Epoch: 019/050 | Batch 20000/26719 | Cost: 0.1339\n",
      "training accuracy: 93.49%\n",
      "valid accuracy: 93.43%\n",
      "Time elapsed: 622.06 min\n",
      "Epoch: 020/050 | Batch 000/26719 | Cost: 0.1498\n",
      "Epoch: 020/050 | Batch 10000/26719 | Cost: 0.2582\n",
      "Epoch: 020/050 | Batch 20000/26719 | Cost: 0.2263\n",
      "training accuracy: 93.41%\n",
      "valid accuracy: 93.32%\n",
      "Time elapsed: 653.69 min\n",
      "Epoch: 021/050 | Batch 000/26719 | Cost: 0.2266\n",
      "Epoch: 021/050 | Batch 10000/26719 | Cost: 0.1824\n",
      "Epoch: 021/050 | Batch 20000/26719 | Cost: 0.2128\n",
      "training accuracy: 93.32%\n",
      "valid accuracy: 93.18%\n",
      "Time elapsed: 685.43 min\n",
      "Epoch: 022/050 | Batch 000/26719 | Cost: 0.1637\n",
      "Epoch: 022/050 | Batch 10000/26719 | Cost: 0.2462\n",
      "Epoch: 022/050 | Batch 20000/26719 | Cost: 0.1890\n",
      "training accuracy: 93.24%\n",
      "valid accuracy: 93.13%\n",
      "Time elapsed: 716.98 min\n",
      "Epoch: 023/050 | Batch 000/26719 | Cost: 0.2072\n",
      "Epoch: 023/050 | Batch 10000/26719 | Cost: 0.1904\n",
      "Epoch: 023/050 | Batch 20000/26719 | Cost: 0.2408\n",
      "training accuracy: 93.13%\n",
      "valid accuracy: 93.02%\n",
      "Time elapsed: 748.55 min\n",
      "Epoch: 024/050 | Batch 000/26719 | Cost: 0.1655\n",
      "Epoch: 024/050 | Batch 10000/26719 | Cost: 0.2909\n",
      "Epoch: 024/050 | Batch 20000/26719 | Cost: 0.1979\n",
      "training accuracy: 93.05%\n",
      "valid accuracy: 92.97%\n",
      "Time elapsed: 780.21 min\n",
      "Epoch: 025/050 | Batch 000/26719 | Cost: 0.1742\n",
      "Epoch: 025/050 | Batch 10000/26719 | Cost: 0.2666\n",
      "Epoch: 025/050 | Batch 20000/26719 | Cost: 0.2489\n",
      "training accuracy: 92.97%\n",
      "valid accuracy: 92.84%\n",
      "Time elapsed: 811.86 min\n",
      "Epoch: 026/050 | Batch 000/26719 | Cost: 0.2000\n",
      "Epoch: 026/050 | Batch 10000/26719 | Cost: 0.1438\n",
      "Epoch: 026/050 | Batch 20000/26719 | Cost: 0.1771\n",
      "training accuracy: 92.80%\n",
      "valid accuracy: 92.69%\n",
      "Time elapsed: 843.59 min\n",
      "Epoch: 027/050 | Batch 000/26719 | Cost: 0.1902\n",
      "Epoch: 027/050 | Batch 10000/26719 | Cost: 0.1842\n",
      "Epoch: 027/050 | Batch 20000/26719 | Cost: 0.2043\n",
      "training accuracy: 92.93%\n",
      "valid accuracy: 92.85%\n",
      "Time elapsed: 875.26 min\n",
      "Epoch: 028/050 | Batch 000/26719 | Cost: 0.1836\n",
      "Epoch: 028/050 | Batch 10000/26719 | Cost: 0.1861\n",
      "Epoch: 028/050 | Batch 20000/26719 | Cost: 0.1953\n",
      "training accuracy: 92.85%\n",
      "valid accuracy: 92.76%\n",
      "Time elapsed: 906.92 min\n",
      "Epoch: 029/050 | Batch 000/26719 | Cost: 0.2089\n",
      "Epoch: 029/050 | Batch 10000/26719 | Cost: 0.2378\n",
      "Epoch: 029/050 | Batch 20000/26719 | Cost: 0.1476\n",
      "training accuracy: 92.84%\n",
      "valid accuracy: 92.74%\n",
      "Time elapsed: 938.51 min\n",
      "Epoch: 030/050 | Batch 000/26719 | Cost: 0.1816\n",
      "Epoch: 030/050 | Batch 10000/26719 | Cost: 0.2420\n",
      "Epoch: 030/050 | Batch 20000/26719 | Cost: 0.1891\n",
      "training accuracy: 92.73%\n",
      "valid accuracy: 92.63%\n",
      "Time elapsed: 970.14 min\n",
      "Epoch: 031/050 | Batch 000/26719 | Cost: 0.1959\n",
      "Epoch: 031/050 | Batch 10000/26719 | Cost: 0.2809\n",
      "Epoch: 031/050 | Batch 20000/26719 | Cost: 0.2692\n",
      "training accuracy: 92.65%\n",
      "valid accuracy: 92.63%\n",
      "Time elapsed: 1001.72 min\n",
      "Epoch: 032/050 | Batch 000/26719 | Cost: 0.1845\n",
      "Epoch: 032/050 | Batch 10000/26719 | Cost: 0.2390\n",
      "Epoch: 032/050 | Batch 20000/26719 | Cost: 0.1673\n",
      "training accuracy: 92.54%\n",
      "valid accuracy: 92.50%\n",
      "Time elapsed: 1033.34 min\n",
      "Epoch: 033/050 | Batch 000/26719 | Cost: 0.1612\n",
      "Epoch: 033/050 | Batch 10000/26719 | Cost: 0.2473\n",
      "Epoch: 033/050 | Batch 20000/26719 | Cost: 0.2368\n",
      "training accuracy: 92.52%\n",
      "valid accuracy: 92.43%\n",
      "Time elapsed: 1064.98 min\n",
      "Epoch: 034/050 | Batch 000/26719 | Cost: 0.1739\n",
      "Epoch: 034/050 | Batch 10000/26719 | Cost: 0.2465\n",
      "Epoch: 034/050 | Batch 20000/26719 | Cost: 0.2751\n",
      "training accuracy: 92.43%\n",
      "valid accuracy: 92.35%\n",
      "Time elapsed: 1096.60 min\n",
      "Epoch: 035/050 | Batch 000/26719 | Cost: 0.1641\n",
      "Epoch: 035/050 | Batch 10000/26719 | Cost: 0.2993\n",
      "Epoch: 035/050 | Batch 20000/26719 | Cost: 0.2110\n",
      "training accuracy: 92.44%\n",
      "valid accuracy: 92.38%\n",
      "Time elapsed: 1128.23 min\n",
      "Epoch: 036/050 | Batch 000/26719 | Cost: 0.1998\n",
      "Epoch: 036/050 | Batch 10000/26719 | Cost: 0.4061\n",
      "Epoch: 036/050 | Batch 20000/26719 | Cost: 0.3348\n",
      "training accuracy: 92.34%\n",
      "valid accuracy: 92.23%\n",
      "Time elapsed: 1159.86 min\n",
      "Epoch: 037/050 | Batch 000/26719 | Cost: 0.2720\n",
      "Epoch: 037/050 | Batch 10000/26719 | Cost: 0.1884\n",
      "Epoch: 037/050 | Batch 20000/26719 | Cost: 0.2429\n",
      "training accuracy: 92.38%\n",
      "valid accuracy: 92.35%\n",
      "Time elapsed: 1191.48 min\n",
      "Epoch: 038/050 | Batch 000/26719 | Cost: 0.1869\n",
      "Epoch: 038/050 | Batch 10000/26719 | Cost: 0.3093\n",
      "Epoch: 038/050 | Batch 20000/26719 | Cost: 0.2258\n",
      "training accuracy: 92.32%\n",
      "valid accuracy: 92.33%\n",
      "Time elapsed: 1223.13 min\n",
      "Epoch: 039/050 | Batch 000/26719 | Cost: 0.2780\n",
      "Epoch: 039/050 | Batch 10000/26719 | Cost: 0.2481\n",
      "Epoch: 039/050 | Batch 20000/26719 | Cost: 0.2593\n",
      "training accuracy: 92.34%\n",
      "valid accuracy: 92.31%\n",
      "Time elapsed: 1254.79 min\n",
      "Epoch: 040/050 | Batch 000/26719 | Cost: 0.1992\n",
      "Epoch: 040/050 | Batch 10000/26719 | Cost: 0.2254\n",
      "Epoch: 040/050 | Batch 20000/26719 | Cost: 0.2145\n",
      "training accuracy: 92.31%\n",
      "valid accuracy: 92.25%\n",
      "Time elapsed: 1286.39 min\n",
      "Epoch: 041/050 | Batch 000/26719 | Cost: 0.1949\n",
      "Epoch: 041/050 | Batch 10000/26719 | Cost: 0.2056\n",
      "Epoch: 041/050 | Batch 20000/26719 | Cost: 0.2562\n",
      "training accuracy: 92.15%\n",
      "valid accuracy: 92.10%\n",
      "Time elapsed: 1318.01 min\n",
      "Epoch: 042/050 | Batch 000/26719 | Cost: 0.2261\n",
      "Epoch: 042/050 | Batch 10000/26719 | Cost: 0.2665\n",
      "Epoch: 042/050 | Batch 20000/26719 | Cost: 0.2810\n",
      "training accuracy: 91.95%\n",
      "valid accuracy: 91.88%\n",
      "Time elapsed: 1349.75 min\n",
      "Epoch: 043/050 | Batch 000/26719 | Cost: 0.2078\n",
      "Epoch: 043/050 | Batch 10000/26719 | Cost: 0.2598\n",
      "Epoch: 043/050 | Batch 20000/26719 | Cost: 0.2550\n",
      "training accuracy: 92.00%\n",
      "valid accuracy: 91.96%\n",
      "Time elapsed: 1381.34 min\n",
      "Epoch: 044/050 | Batch 000/26719 | Cost: 0.1947\n",
      "Epoch: 044/050 | Batch 10000/26719 | Cost: 0.2332\n",
      "Epoch: 044/050 | Batch 20000/26719 | Cost: 0.3156\n",
      "training accuracy: 91.84%\n",
      "valid accuracy: 91.83%\n",
      "Time elapsed: 1412.81 min\n",
      "Epoch: 045/050 | Batch 000/26719 | Cost: 0.2643\n",
      "Epoch: 045/050 | Batch 10000/26719 | Cost: 0.2745\n",
      "Epoch: 045/050 | Batch 20000/26719 | Cost: 0.3741\n",
      "training accuracy: 91.98%\n",
      "valid accuracy: 91.94%\n",
      "Time elapsed: 1444.41 min\n",
      "Epoch: 046/050 | Batch 000/26719 | Cost: 0.2029\n",
      "Epoch: 046/050 | Batch 10000/26719 | Cost: 0.2028\n",
      "Epoch: 046/050 | Batch 20000/26719 | Cost: 0.2525\n",
      "training accuracy: 91.84%\n",
      "valid accuracy: 91.86%\n",
      "Time elapsed: 1476.07 min\n",
      "Epoch: 047/050 | Batch 000/26719 | Cost: 0.2104\n",
      "Epoch: 047/050 | Batch 10000/26719 | Cost: 0.1793\n",
      "Epoch: 047/050 | Batch 20000/26719 | Cost: 0.2022\n",
      "training accuracy: 91.75%\n",
      "valid accuracy: 91.73%\n",
      "Time elapsed: 1507.73 min\n",
      "Epoch: 048/050 | Batch 000/26719 | Cost: 0.3482\n",
      "Epoch: 048/050 | Batch 10000/26719 | Cost: 0.2211\n",
      "Epoch: 048/050 | Batch 20000/26719 | Cost: 0.2857\n",
      "training accuracy: 91.62%\n",
      "valid accuracy: 91.56%\n",
      "Time elapsed: 1539.42 min\n",
      "Epoch: 049/050 | Batch 000/26719 | Cost: 0.2514\n",
      "Epoch: 049/050 | Batch 10000/26719 | Cost: 0.2387\n",
      "Epoch: 049/050 | Batch 20000/26719 | Cost: 0.2515\n",
      "training accuracy: 91.54%\n",
      "valid accuracy: 91.47%\n",
      "Time elapsed: 1571.06 min\n",
      "Epoch: 050/050 | Batch 000/26719 | Cost: 0.2802\n",
      "Epoch: 050/050 | Batch 10000/26719 | Cost: 0.3489\n",
      "Epoch: 050/050 | Batch 20000/26719 | Cost: 0.2609\n",
      "training accuracy: 91.49%\n",
      "valid accuracy: 91.40%\n",
      "Time elapsed: 1602.62 min\n",
      "Total Training Time: 1602.62 min\n",
      "Test accuracy: 91.36%\n"
     ]
    }
   ],
   "source": [
    "start_time = time.time()\n",
    "\n",
    "for epoch in range(NUM_EPOCHS):\n",
    "    model.train()\n",
    "    for batch_idx, batch_data in enumerate(train_loader):\n",
    "        \n",
    "        text, text_lengths = batch_data.content\n",
    "        \n",
    "        ### FORWARD AND BACK PROP\n",
    "        logits = model(text, text_lengths).squeeze(1)\n",
    "        cost = F.cross_entropy(logits, batch_data.classlabel.long())\n",
    "        optimizer.zero_grad()\n",
    "        \n",
    "        cost.backward()\n",
    "        \n",
    "        ### UPDATE MODEL PARAMETERS\n",
    "        optimizer.step()\n",
    "        \n",
    "        ### LOGGING\n",
    "        if not batch_idx % 10000:\n",
    "            print(f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '\n",
    "                   f'Batch {batch_idx:03d}/{len(train_loader):03d} | '\n",
    "                   f'Cost: {cost:.4f}')\n",
    "\n",
    "    with torch.no_grad():\n",
    "        print(f'training accuracy: '\n",
    "              f'{compute_accuracy(model, train_loader, DEVICE):.2f}%'\n",
    "              f'\\nvalid accuracy: '\n",
    "              f'{compute_accuracy(model, valid_loader, DEVICE):.2f}%')\n",
    "        \n",
    "    print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')\n",
    "    \n",
    "print(f'Total Training Time: {(time.time() - start_time)/60:.2f} min')\n",
    "print(f'Test accuracy: {compute_accuracy(model, test_loader, DEVICE):.2f}%')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "7lRusB3dF80X"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "spacy     2.2.3\n",
      "pandas    0.24.2\n",
      "torchtext 0.4.0\n",
      "numpy     1.17.2\n",
      "torch     1.3.0\n",
      "\n"
     ]
    }
   ],
   "source": [
    "%watermark -iv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.save(model.state_dict(), 'rnn_bi_multilayer_lstm_own_csv_amazon-polarity.pt')"
   ]
  }
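  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal reload sketch (not executed as part of the run above): assuming the model class and the same hyperparameters used earlier in this notebook are in scope, the saved state dict can be restored for inference, e.g.\n",
    "\n",
    "```python\n",
    "# Restore the weights saved above; map_location lets a GPU-trained\n",
    "# checkpoint load on a CPU-only machine.\n",
    "model.load_state_dict(torch.load('rnn_bi_multilayer_lstm_own_csv_amazon-polarity.pt',\n",
    "                                 map_location=torch.device('cpu')))\n",
    "model.eval()  # switch off dropout before evaluating\n",
    "```"
   ]
  }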
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "rnn_lstm_packed_imdb.ipynb",
   "provenance": [],
   "version": "0.3.2"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
