{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.\n",
    "- Author: Sebastian Raschka\n",
    "- GitHub Repository: https://github.com/rasbt/deeplearning-models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "vY4SK0xKAJgm"
   },
   "source": [
    "# Bidirectional Multi-layer RNN (LSTM) with Own Dataset in CSV Format (Yelp Review Polarity)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dataset Description\n",
    "\n",
    "```\n",
    "Yelp Review Polarity Dataset\n",
    "\n",
    "Version 1, Updated 09/09/2015\n",
    "\n",
    "ORIGIN\n",
    "\n",
    "The Yelp reviews dataset consists of reviews from Yelp. It is extracted from the Yelp Dataset Challenge 2015 data. For more information, please refer to http://www.yelp.com/dataset_challenge\n",
    "\n",
    "The Yelp reviews polarity dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the above dataset. It is first used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).\n",
    "\n",
    "\n",
    "DESCRIPTION\n",
    "\n",
    "The Yelp reviews polarity dataset is constructed by considering stars 1 and 2 negative, and 3 and 4 positive. For each polarity, 280,000 training samples and 19,000 testing samples are taken randomly. In total there are 560,000 training samples and 38,000 testing samples. Negative polarity is class 1, and positive is class 2.\n",
    "\n",
    "The files train.csv and test.csv contain the training and testing samples as comma-separated values. There are 2 columns in them, corresponding to class index (1 and 2) and review text. The review texts are escaped using double quotes (\"), and any internal double quote is escaped by 2 double quotes (\"\"). New lines are escaped by a backslash followed with an \"n\" character, that is \"\\n\".\n",
    "```"
   ]
  },
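  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the quoting convention above concrete, here is a small self-contained sketch (the sample row below is made up for illustration) showing how Python's built-in `csv` module parses a line in this format:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import csv\n",
    "import io\n",
    "\n",
    "# Made-up row in the train.csv format: class index, then the quoted\n",
    "# review text; internal double quotes are doubled, and newlines are\n",
    "# stored as a literal backslash followed by an \"n\" character.\n",
    "sample = '2,\"Great food! The owner said \"\"come back soon.\"\"\\\\nWe will.\"'\n",
    "label, text = next(csv.reader(io.StringIO(sample)))\n",
    "print(label)  # '2'\n",
    "print(text)   # the review text with the doubled quotes unescaped"
   ]
  },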
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "moNmVfuvnImW"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sebastian Raschka \n",
      "\n",
      "CPython 3.7.3\n",
      "IPython 7.9.0\n",
      "\n",
      "torch 1.3.0\n"
     ]
    }
   ],
   "source": [
    "%load_ext watermark\n",
    "%watermark -a 'Sebastian Raschka' -v -p torch\n",
    "\n",
    "\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "from torchtext import data\n",
    "from torchtext import datasets\n",
    "import time\n",
    "import random\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "torch.backends.cudnn.deterministic = True"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GSRL42Qgy8I8"
   },
   "source": [
    "## General Settings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "OvW1RgfepCBq"
   },
   "outputs": [],
   "source": [
    "RANDOM_SEED = 123\n",
    "torch.manual_seed(RANDOM_SEED)\n",
    "\n",
    "VOCABULARY_SIZE = 5000\n",
    "LEARNING_RATE = 1e-3\n",
    "BATCH_SIZE = 128\n",
    "NUM_EPOCHS = 50\n",
    "DROPOUT = 0.5\n",
    "DEVICE = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')\n",
    "\n",
    "EMBEDDING_DIM = 128\n",
    "BIDIRECTIONAL = True\n",
    "HIDDEN_DIM = 256\n",
    "NUM_LAYERS = 2\n",
    "OUTPUT_DIM = 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "mQMmKUEisW4W"
   },
   "source": [
    "## Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Yelp Review Polarity dataset is available from Xiang Zhang's Google Drive folder at\n",
    "\n",
    "https://drive.google.com/drive/u/0/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M\n",
    "\n",
    "From the Google Drive folder, download the file \n",
    "\n",
    "- `yelp_review_polarity_csv.tar.gz`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "yelp_review_polarity_csv/\n",
      "yelp_review_polarity_csv/readme.txt\n",
      "yelp_review_polarity_csv/test.csv\n",
      "yelp_review_polarity_csv/train.csv\n"
     ]
    }
   ],
   "source": [
    "!tar xvzf yelp_review_polarity_csv.tar.gz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check that the dataset looks okay:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>classlabel</th>\n",
       "      <th>content</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>Unfortunately, the frustration of being Dr. Go...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>Been going to Dr. Goldberg for over 10 years. ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0</td>\n",
       "      <td>I don't know what Dr. Goldberg was like before...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0</td>\n",
       "      <td>I'm writing this review to give you a heads up...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>All the food is great here. But the best thing...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   classlabel                                            content\n",
       "0           0  Unfortunately, the frustration of being Dr. Go...\n",
       "1           1  Been going to Dr. Goldberg for over 10 years. ...\n",
       "2           0  I don't know what Dr. Goldberg was like before...\n",
       "3           0  I'm writing this review to give you a heads up...\n",
       "4           1  All the food is great here. But the best thing..."
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = pd.read_csv('yelp_review_polarity_csv/train.csv', header=None, index_col=None)\n",
    "df.columns = ['classlabel', 'content']\n",
    "df['classlabel'] = df['classlabel']-1\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0, 1])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.unique(df['classlabel'].values)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([280000, 280000])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.bincount(df['classlabel'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[['classlabel', 'content']].to_csv('yelp_review_polarity_csv/train_prepocessed.csv', index=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>classlabel</th>\n",
       "      <th>content</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>Contrary to other reviews, I have zero complai...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0</td>\n",
       "      <td>Last summer I had an appointment to get new ti...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1</td>\n",
       "      <td>Friendly staff, same starbucks fair you get an...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0</td>\n",
       "      <td>The food is good. Unfortunately the service is...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>Even when we didn't have a car Filene's Baseme...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   classlabel                                            content\n",
       "0           1  Contrary to other reviews, I have zero complai...\n",
       "1           0  Last summer I had an appointment to get new ti...\n",
       "2           1  Friendly staff, same starbucks fair you get an...\n",
       "3           0  The food is good. Unfortunately the service is...\n",
       "4           1  Even when we didn't have a car Filene's Baseme..."
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = pd.read_csv('yelp_review_polarity_csv/test.csv', header=None, index_col=None)\n",
    "df.columns = ['classlabel', 'content']\n",
    "df['classlabel'] = df['classlabel']-1\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0, 1])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.unique(df['classlabel'].values)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([19000, 19000])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.bincount(df['classlabel'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[['classlabel', 'content']].to_csv('yelp_review_polarity_csv/test_prepocessed.csv', index=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "del df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "4GnH64XvsV8n"
   },
   "source": [
    "Define the Label and Text field formatters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "TEXT = data.Field(sequential=True,\n",
    "                  tokenize='spacy',\n",
    "                  include_lengths=True) # necessary for packed_padded_sequence\n",
    "\n",
    "LABEL = data.LabelField(dtype=torch.float)\n",
    "\n",
    "\n",
    "# If you get an error [E050] Can't find model 'en'\n",
    "# you need to run the following on your command line:\n",
    "#  python -m spacy download en"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Process the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "fields = [('classlabel', LABEL), ('content', TEXT)]\n",
    "\n",
    "train_dataset = data.TabularDataset(\n",
    "    path=\"yelp_review_polarity_csv/train_prepocessed.csv\", format='csv',\n",
    "    skip_header=True, fields=fields)\n",
    "\n",
    "test_dataset = data.TabularDataset(\n",
    "    path=\"yelp_review_polarity_csv/test_prepocessed.csv\", format='csv',\n",
    "    skip_header=True, fields=fields)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Split the training dataset into training and validation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 68
    },
    "colab_type": "code",
    "id": "WZ_4jiHVnMxN",
    "outputId": "dfa51c04-4845-44c3-f50b-d36d41f132b8"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Num Train: 532000\n",
      "Num Valid: 28000\n"
     ]
    }
   ],
   "source": [
    "train_data, valid_data = train_dataset.split(\n",
    "    split_ratio=[0.95, 0.05],\n",
    "    random_state=random.seed(RANDOM_SEED))\n",
    "\n",
    "print(f'Num Train: {len(train_data)}')\n",
    "print(f'Num Valid: {len(valid_data)}')"
   ]
  },
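  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The printed split sizes follow directly from applying the 95/5 ratio to the 560,000 training rows; as a quick plain-Python sanity check:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n_total = 560_000  # rows in train.csv\n",
    "n_train = round(n_total * 0.95)\n",
    "n_valid = n_total - n_train\n",
    "print(n_train)  # 532000\n",
    "print(n_valid)  # 28000"
   ]
  },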
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "L-TBwKWPslPa"
   },
   "source": [
    "Build the vocabulary based on the top `VOCABULARY_SIZE` words:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 51
    },
    "colab_type": "code",
    "id": "e8uNrjdtn4A8",
    "outputId": "6cf499d7-7722-4da0-8576-ee0f218cc6e3"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Vocabulary size: 5002\n",
      "Number of classes: 2\n"
     ]
    }
   ],
   "source": [
    "TEXT.build_vocab(train_data,\n",
    "                 max_size=VOCABULARY_SIZE,\n",
    "                 vectors='glove.6B.100d',\n",
    "                 unk_init=torch.Tensor.normal_)\n",
    "\n",
    "LABEL.build_vocab(train_data)\n",
    "\n",
    "print(f'Vocabulary size: {len(TEXT.vocab)}')\n",
    "print(f'Number of classes: {len(LABEL.vocab)}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['1', '0']"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "list(LABEL.vocab.freqs)[-10:]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "JpEMNInXtZsb"
   },
   "source": [
    "The `TEXT.vocab` object contains the word counts and word-to-index mappings. The reason why the number of words is VOCABULARY_SIZE + 2 is that it contains two special tokens for padding and unknown words: `<unk>` and `<pad>`."
   ]
  },
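  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough standard-library sketch of what `build_vocab` does under the hood (the toy corpus is made up for illustration), the vocabulary is the `max_size` most frequent tokens plus the two special tokens:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "# Toy corpus standing in for the tokenized training reviews\n",
    "tokens = 'the food was great and the service was great too'.split()\n",
    "max_size = 3\n",
    "itos = ['<unk>', '<pad>'] + [w for w, _ in Counter(tokens).most_common(max_size)]\n",
    "stoi = {w: i for i, w in enumerate(itos)}\n",
    "print(itos)       # 2 special tokens + the 3 most frequent words\n",
    "print(len(itos))  # max_size + 2"
   ]
  },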
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "eIQ_zfKLwjKm"
   },
   "source": [
    "Make dataset iterators:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "i7JiHR1stHNF"
   },
   "outputs": [],
   "source": [
    "train_loader, valid_loader, test_loader = data.BucketIterator.splits(\n",
    "    (train_data, valid_data, test_dataset), \n",
    "    batch_size=BATCH_SIZE,\n",
    "    sort_within_batch=True, # necessary for packed_padded_sequence\n",
    "    sort_key=lambda x: len(x.content),\n",
    "    device=DEVICE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "R0pT_dMRvicQ"
   },
   "source": [
    "Testing the iterators (note that the number of rows depends on the longest document in the respective batch):"
   ]
  },
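  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal illustration of that padding behavior (the toy batch is made up for illustration; `BucketIterator` pads each batch only up to that batch's own longest sequence):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy mini-batch of tokenized reviews with different lengths\n",
    "batch = [['good', 'food'], ['service', 'was', 'slow'], ['great']]\n",
    "max_len = max(len(seq) for seq in batch)\n",
    "padded = [seq + ['<pad>'] * (max_len - len(seq)) for seq in batch]\n",
    "print(max_len)  # number of rows in the resulting (seq_len x batch_size) matrix\n",
    "print(padded)"
   ]
  },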
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 204
    },
    "colab_type": "code",
    "id": "y8SP_FccutT0",
    "outputId": "fe33763a-4560-4dee-adee-31cc6c48b0b2"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train\n",
      "Text matrix size: torch.Size([113, 128])\n",
      "Target vector size: torch.Size([128])\n",
      "\n",
      "Valid:\n",
      "Text matrix size: torch.Size([6, 128])\n",
      "Target vector size: torch.Size([128])\n",
      "\n",
      "Test:\n",
      "Text matrix size: torch.Size([5, 128])\n",
      "Target vector size: torch.Size([128])\n"
     ]
    }
   ],
   "source": [
    "print('Train')\n",
    "for batch in train_loader:\n",
    "    print(f'Text matrix size: {batch.content[0].size()}')\n",
    "    print(f'Target vector size: {batch.classlabel.size()}')\n",
    "    break\n",
    "    \n",
    "print('\\nValid:')\n",
    "for batch in valid_loader:\n",
    "    print(f'Text matrix size: {batch.content[0].size()}')\n",
    "    print(f'Target vector size: {batch.classlabel.size()}')\n",
    "    break\n",
    "    \n",
    "print('\\nTest:')\n",
    "for batch in test_loader:\n",
    "    print(f'Text matrix size: {batch.content[0].size()}')\n",
    "    print(f'Target vector size: {batch.classlabel.size()}')\n",
    "    break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "G_grdW3pxCzz"
   },
   "source": [
    "## Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "nQIUm5EjxFNa"
   },
   "outputs": [],
   "source": [
    "import torch.nn as nn\n",
    "\n",
    "\n",
    "class RNN(nn.Module):\n",
    "    def __init__(self, input_dim, embedding_dim, bidirectional, hidden_dim, num_layers, output_dim, dropout, pad_idx):\n",
    "        \n",
    "        super().__init__()\n",
    "        \n",
    "        self.embedding = nn.Embedding(input_dim, embedding_dim, padding_idx=pad_idx)\n",
    "        self.rnn = nn.LSTM(embedding_dim, \n",
    "                           hidden_dim,\n",
    "                           num_layers=num_layers,\n",
    "                           bidirectional=bidirectional, \n",
    "                           dropout=dropout)\n",
    "        # input: the last layer's final forward and backward hidden states, concatenated\n",
    "        self.fc1 = nn.Linear(hidden_dim * 2, 64)\n",
    "        self.fc2 = nn.Linear(64, output_dim)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "        \n",
    "    def forward(self, text, text_length):\n",
    "\n",
    "        embedded = self.dropout(self.embedding(text))\n",
    "        packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_length)\n",
    "        packed_output, (hidden, cell) = self.rnn(packed_embedded)\n",
    "        output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)\n",
    "        # concatenate the final hidden states of the last layer's\n",
    "        # forward (hidden[-2]) and backward (hidden[-1]) directions\n",
    "        hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim=1))\n",
    "        logits = self.fc2(F.relu(self.fc1(hidden)))\n",
    "        return logits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "Ik3NF3faxFmZ"
   },
   "outputs": [],
   "source": [
    "INPUT_DIM = len(TEXT.vocab)\n",
    "\n",
    "PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]\n",
    "\n",
    "torch.manual_seed(RANDOM_SEED)\n",
    "model = RNN(INPUT_DIM, EMBEDDING_DIM, BIDIRECTIONAL, HIDDEN_DIM, NUM_LAYERS, OUTPUT_DIM, DROPOUT, PAD_IDX)\n",
    "model = model.to(DEVICE)\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Lv9Ny9di6VcI"
   },
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "T5t1Afn4xO11"
   },
   "outputs": [],
   "source": [
    "def compute_accuracy(model, data_loader, device):\n",
    "    model.eval()\n",
    "    correct_pred, num_examples = 0, 0\n",
    "    with torch.no_grad():\n",
    "        for batch_idx, batch_data in enumerate(data_loader):\n",
    "            text, text_lengths = batch_data.content\n",
    "            logits = model(text, text_lengths).squeeze(1)\n",
    "            _, predicted_labels = torch.max(logits, 1)\n",
    "            num_examples += batch_data.classlabel.size(0)\n",
    "            correct_pred += (predicted_labels.long() == batch_data.classlabel.long()).sum()\n",
    "        return correct_pred.float()/num_examples * 100"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1836
    },
    "colab_type": "code",
    "id": "EABZM8Vo0ilB",
    "outputId": "5d45e293-9909-4588-e793-8dfaf72e5c67"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 001/050 | Batch 000/4157 | Cost: 4.1925\n",
      "Epoch: 001/050 | Batch 1000/4157 | Cost: 0.3392\n",
      "Epoch: 001/050 | Batch 2000/4157 | Cost: 0.3254\n",
      "Epoch: 001/050 | Batch 3000/4157 | Cost: 0.3263\n",
      "Epoch: 001/050 | Batch 4000/4157 | Cost: 0.1488\n",
      "training accuracy: 94.50%\n",
      "valid accuracy: 94.12%\n",
      "Time elapsed: 8.57 min\n",
      "Epoch: 002/050 | Batch 000/4157 | Cost: 0.2246\n",
      "Epoch: 002/050 | Batch 1000/4157 | Cost: 0.1248\n",
      "Epoch: 002/050 | Batch 2000/4157 | Cost: 0.1107\n",
      "Epoch: 002/050 | Batch 3000/4157 | Cost: 0.1820\n",
      "Epoch: 002/050 | Batch 4000/4157 | Cost: 0.0808\n",
      "training accuracy: 95.75%\n",
      "valid accuracy: 95.35%\n",
      "Time elapsed: 17.23 min\n",
      "Epoch: 003/050 | Batch 000/4157 | Cost: 0.0877\n",
      "Epoch: 003/050 | Batch 1000/4157 | Cost: 0.0720\n",
      "Epoch: 003/050 | Batch 2000/4157 | Cost: 0.0770\n",
      "Epoch: 003/050 | Batch 3000/4157 | Cost: 0.0876\n",
      "Epoch: 003/050 | Batch 4000/4157 | Cost: 0.0851\n",
      "training accuracy: 96.15%\n",
      "valid accuracy: 95.62%\n",
      "Time elapsed: 25.90 min\n",
      "Epoch: 004/050 | Batch 000/4157 | Cost: 0.1596\n",
      "Epoch: 004/050 | Batch 1000/4157 | Cost: 0.1571\n",
      "Epoch: 004/050 | Batch 2000/4157 | Cost: 0.1728\n",
      "Epoch: 004/050 | Batch 3000/4157 | Cost: 0.0911\n",
      "Epoch: 004/050 | Batch 4000/4157 | Cost: 0.1380\n",
      "training accuracy: 96.46%\n",
      "valid accuracy: 95.86%\n",
      "Time elapsed: 34.65 min\n",
      "Epoch: 005/050 | Batch 000/4157 | Cost: 0.2183\n",
      "Epoch: 005/050 | Batch 1000/4157 | Cost: 0.0951\n",
      "Epoch: 005/050 | Batch 2000/4157 | Cost: 0.1052\n",
      "Epoch: 005/050 | Batch 3000/4157 | Cost: 0.0759\n",
      "Epoch: 005/050 | Batch 4000/4157 | Cost: 0.0705\n",
      "training accuracy: 96.57%\n",
      "valid accuracy: 95.69%\n",
      "Time elapsed: 43.71 min\n",
      "Epoch: 006/050 | Batch 000/4157 | Cost: 0.1320\n",
      "Epoch: 006/050 | Batch 1000/4157 | Cost: 0.0989\n",
      "Epoch: 006/050 | Batch 2000/4157 | Cost: 0.1763\n",
      "Epoch: 006/050 | Batch 3000/4157 | Cost: 0.1935\n",
      "Epoch: 006/050 | Batch 4000/4157 | Cost: 0.1201\n",
      "training accuracy: 96.80%\n",
      "valid accuracy: 95.91%\n",
      "Time elapsed: 52.72 min\n",
      "Epoch: 007/050 | Batch 000/4157 | Cost: 0.1282\n",
      "Epoch: 007/050 | Batch 1000/4157 | Cost: 0.0945\n",
      "Epoch: 007/050 | Batch 2000/4157 | Cost: 0.1035\n",
      "Epoch: 007/050 | Batch 3000/4157 | Cost: 0.0490\n",
      "Epoch: 007/050 | Batch 4000/4157 | Cost: 0.1134\n",
      "training accuracy: 96.97%\n",
      "valid accuracy: 96.08%\n",
      "Time elapsed: 61.70 min\n",
      "Epoch: 008/050 | Batch 000/4157 | Cost: 0.0646\n",
      "Epoch: 008/050 | Batch 1000/4157 | Cost: 0.0576\n",
      "Epoch: 008/050 | Batch 2000/4157 | Cost: 0.0668\n",
      "Epoch: 008/050 | Batch 3000/4157 | Cost: 0.1527\n",
      "Epoch: 008/050 | Batch 4000/4157 | Cost: 0.0996\n",
      "training accuracy: 97.05%\n",
      "valid accuracy: 96.15%\n",
      "Time elapsed: 70.65 min\n",
      "Epoch: 009/050 | Batch 000/4157 | Cost: 0.1095\n",
      "Epoch: 009/050 | Batch 1000/4157 | Cost: 0.1356\n",
      "Epoch: 009/050 | Batch 2000/4157 | Cost: 0.0523\n",
      "Epoch: 009/050 | Batch 3000/4157 | Cost: 0.0761\n",
      "Epoch: 009/050 | Batch 4000/4157 | Cost: 0.0700\n",
      "training accuracy: 97.11%\n",
      "valid accuracy: 96.09%\n",
      "Time elapsed: 79.68 min\n",
      "Epoch: 010/050 | Batch 000/4157 | Cost: 0.0975\n",
      "Epoch: 010/050 | Batch 1000/4157 | Cost: 0.1032\n",
      "Epoch: 010/050 | Batch 2000/4157 | Cost: 0.1357\n",
      "Epoch: 010/050 | Batch 3000/4157 | Cost: 0.0950\n",
      "Epoch: 010/050 | Batch 4000/4157 | Cost: 0.1263\n",
      "training accuracy: 97.11%\n",
      "valid accuracy: 96.06%\n",
      "Time elapsed: 88.72 min\n",
      "Epoch: 011/050 | Batch 000/4157 | Cost: 0.0440\n",
      "Epoch: 011/050 | Batch 1000/4157 | Cost: 0.0980\n",
      "Epoch: 011/050 | Batch 2000/4157 | Cost: 0.0603\n",
      "Epoch: 011/050 | Batch 3000/4157 | Cost: 0.0524\n",
      "Epoch: 011/050 | Batch 4000/4157 | Cost: 0.0840\n",
      "training accuracy: 97.29%\n",
      "valid accuracy: 96.21%\n",
      "Time elapsed: 97.68 min\n",
      "Epoch: 012/050 | Batch 000/4157 | Cost: 0.1569\n",
      "Epoch: 012/050 | Batch 1000/4157 | Cost: 0.0744\n",
      "Epoch: 012/050 | Batch 2000/4157 | Cost: 0.1388\n",
      "Epoch: 012/050 | Batch 3000/4157 | Cost: 0.0720\n",
      "Epoch: 012/050 | Batch 4000/4157 | Cost: 0.0588\n",
      "training accuracy: 97.24%\n",
      "valid accuracy: 96.15%\n",
      "Time elapsed: 106.73 min\n",
      "Epoch: 013/050 | Batch 000/4157 | Cost: 0.0353\n",
      "Epoch: 013/050 | Batch 1000/4157 | Cost: 0.1184\n",
      "Epoch: 013/050 | Batch 2000/4157 | Cost: 0.0866\n",
      "Epoch: 013/050 | Batch 3000/4157 | Cost: 0.0525\n",
      "Epoch: 013/050 | Batch 4000/4157 | Cost: 0.0722\n",
      "training accuracy: 97.13%\n",
      "valid accuracy: 95.86%\n",
      "Time elapsed: 115.74 min\n",
      "Epoch: 014/050 | Batch 000/4157 | Cost: 0.0898\n",
      "Epoch: 014/050 | Batch 1000/4157 | Cost: 0.0936\n",
      "Epoch: 014/050 | Batch 2000/4157 | Cost: 0.0786\n",
      "Epoch: 014/050 | Batch 3000/4157 | Cost: 0.0615\n",
      "Epoch: 014/050 | Batch 4000/4157 | Cost: 0.1044\n",
      "training accuracy: 97.33%\n",
      "valid accuracy: 96.11%\n",
      "Time elapsed: 124.77 min\n",
      "Epoch: 015/050 | Batch 000/4157 | Cost: 0.1224\n",
      "Epoch: 015/050 | Batch 1000/4157 | Cost: 0.0771\n",
      "Epoch: 015/050 | Batch 2000/4157 | Cost: 0.1181\n",
      "Epoch: 015/050 | Batch 3000/4157 | Cost: 0.0447\n",
      "Epoch: 015/050 | Batch 4000/4157 | Cost: 0.0996\n",
      "training accuracy: 97.39%\n",
      "valid accuracy: 96.10%\n",
      "Time elapsed: 133.71 min\n",
      "Epoch: 016/050 | Batch 000/4157 | Cost: 0.0977\n",
      "Epoch: 016/050 | Batch 1000/4157 | Cost: 0.1531\n",
      "Epoch: 016/050 | Batch 2000/4157 | Cost: 0.0744\n",
      "Epoch: 016/050 | Batch 3000/4157 | Cost: 0.0793\n",
      "Epoch: 016/050 | Batch 4000/4157 | Cost: 0.0540\n",
      "training accuracy: 97.54%\n",
      "valid accuracy: 96.31%\n",
      "Time elapsed: 142.78 min\n",
      "Epoch: 017/050 | Batch 000/4157 | Cost: 0.1054\n",
      "Epoch: 017/050 | Batch 1000/4157 | Cost: 0.0698\n",
      "Epoch: 017/050 | Batch 2000/4157 | Cost: 0.0439\n",
      "Epoch: 017/050 | Batch 3000/4157 | Cost: 0.0602\n",
      "Epoch: 017/050 | Batch 4000/4157 | Cost: 0.0843\n",
      "training accuracy: 97.41%\n",
      "valid accuracy: 96.08%\n",
      "Time elapsed: 151.83 min\n",
      "Epoch: 018/050 | Batch 000/4157 | Cost: 0.1025\n",
      "Epoch: 018/050 | Batch 1000/4157 | Cost: 0.1091\n",
      "Epoch: 018/050 | Batch 2000/4157 | Cost: 0.0359\n",
      "Epoch: 018/050 | Batch 3000/4157 | Cost: 0.0509\n",
      "Epoch: 018/050 | Batch 4000/4157 | Cost: 0.0674\n",
      "training accuracy: 97.50%\n",
      "valid accuracy: 96.15%\n",
      "Time elapsed: 160.86 min\n",
      "Epoch: 019/050 | Batch 000/4157 | Cost: 0.0795\n",
      "Epoch: 019/050 | Batch 1000/4157 | Cost: 0.0561\n",
      "Epoch: 019/050 | Batch 2000/4157 | Cost: 0.0533\n",
      "Epoch: 019/050 | Batch 3000/4157 | Cost: 0.0801\n",
      "Epoch: 019/050 | Batch 4000/4157 | Cost: 0.1394\n",
      "training accuracy: 97.60%\n",
      "valid accuracy: 96.19%\n",
      "Time elapsed: 169.83 min\n",
      "Epoch: 020/050 | Batch 000/4157 | Cost: 0.0896\n",
      "Epoch: 020/050 | Batch 1000/4157 | Cost: 0.1357\n",
      "Epoch: 020/050 | Batch 2000/4157 | Cost: 0.0574\n",
      "Epoch: 020/050 | Batch 3000/4157 | Cost: 0.0695\n",
      "Epoch: 020/050 | Batch 4000/4157 | Cost: 0.0781\n",
      "training accuracy: 97.56%\n",
      "valid accuracy: 96.16%\n",
      "Time elapsed: 178.88 min\n",
      "Epoch: 021/050 | Batch 000/4157 | Cost: 0.1040\n",
      "Epoch: 021/050 | Batch 1000/4157 | Cost: 0.0993\n",
      "Epoch: 021/050 | Batch 2000/4157 | Cost: 0.0427\n",
      "Epoch: 021/050 | Batch 3000/4157 | Cost: 0.1151\n",
      "Epoch: 021/050 | Batch 4000/4157 | Cost: 0.0666\n",
      "training accuracy: 97.60%\n",
      "valid accuracy: 96.14%\n",
      "Time elapsed: 187.91 min\n",
      "Epoch: 022/050 | Batch 000/4157 | Cost: 0.0760\n",
      "Epoch: 022/050 | Batch 1000/4157 | Cost: 0.0557\n",
      "Epoch: 022/050 | Batch 2000/4157 | Cost: 0.0538\n",
      "Epoch: 022/050 | Batch 3000/4157 | Cost: 0.0619\n",
      "Epoch: 022/050 | Batch 4000/4157 | Cost: 0.0884\n",
      "training accuracy: 97.55%\n",
      "valid accuracy: 96.16%\n",
      "Time elapsed: 196.92 min\n",
      "Epoch: 023/050 | Batch 000/4157 | Cost: 0.0938\n",
      "Epoch: 023/050 | Batch 1000/4157 | Cost: 0.0543\n",
      "Epoch: 023/050 | Batch 2000/4157 | Cost: 0.0295\n",
      "Epoch: 023/050 | Batch 3000/4157 | Cost: 0.1257\n",
      "Epoch: 023/050 | Batch 4000/4157 | Cost: 0.0690\n",
      "training accuracy: 97.54%\n",
      "valid accuracy: 96.19%\n",
      "Time elapsed: 205.98 min\n",
      "Epoch: 024/050 | Batch 000/4157 | Cost: 0.0709\n",
      "Epoch: 024/050 | Batch 1000/4157 | Cost: 0.0676\n",
      "Epoch: 024/050 | Batch 2000/4157 | Cost: 0.1822\n",
      "Epoch: 024/050 | Batch 3000/4157 | Cost: 0.0687\n",
      "Epoch: 024/050 | Batch 4000/4157 | Cost: 0.0737\n",
      "training accuracy: 97.68%\n",
      "valid accuracy: 96.28%\n",
      "Time elapsed: 215.04 min\n",
      "Epoch: 025/050 | Batch 000/4157 | Cost: 0.0740\n",
      "Epoch: 025/050 | Batch 1000/4157 | Cost: 0.0932\n",
      "Epoch: 025/050 | Batch 2000/4157 | Cost: 0.1179\n",
      "Epoch: 025/050 | Batch 3000/4157 | Cost: 0.0735\n",
      "Epoch: 025/050 | Batch 4000/4157 | Cost: 0.1019\n",
      "training accuracy: 97.68%\n",
      "valid accuracy: 96.25%\n",
      "Time elapsed: 224.07 min\n",
      "Epoch: 026/050 | Batch 000/4157 | Cost: 0.0893\n",
      "Epoch: 026/050 | Batch 1000/4157 | Cost: 0.0890\n",
      "Epoch: 026/050 | Batch 2000/4157 | Cost: 0.0736\n",
      "Epoch: 026/050 | Batch 3000/4157 | Cost: 0.0675\n",
      "Epoch: 026/050 | Batch 4000/4157 | Cost: 0.0344\n",
      "training accuracy: 97.62%\n",
      "valid accuracy: 96.23%\n",
      "Time elapsed: 233.00 min\n",
      "Epoch: 027/050 | Batch 000/4157 | Cost: 0.0331\n",
      "Epoch: 027/050 | Batch 1000/4157 | Cost: 0.1079\n",
      "Epoch: 027/050 | Batch 2000/4157 | Cost: 0.0800\n",
      "Epoch: 027/050 | Batch 3000/4157 | Cost: 0.0703\n",
      "Epoch: 027/050 | Batch 4000/4157 | Cost: 0.0759\n",
      "training accuracy: 97.62%\n",
      "valid accuracy: 96.11%\n",
      "Time elapsed: 242.07 min\n",
      "Epoch: 028/050 | Batch 000/4157 | Cost: 0.1071\n",
      "Epoch: 028/050 | Batch 1000/4157 | Cost: 0.0826\n",
      "Epoch: 028/050 | Batch 2000/4157 | Cost: 0.0699\n",
      "Epoch: 028/050 | Batch 3000/4157 | Cost: 0.0783\n",
      "Epoch: 028/050 | Batch 4000/4157 | Cost: 0.0550\n",
      "training accuracy: 97.55%\n",
      "valid accuracy: 96.09%\n",
      "Time elapsed: 251.10 min\n",
      "Epoch: 029/050 | Batch 000/4157 | Cost: 0.0291\n",
      "Epoch: 029/050 | Batch 1000/4157 | Cost: 0.0881\n",
      "Epoch: 029/050 | Batch 2000/4157 | Cost: 0.0537\n",
      "Epoch: 029/050 | Batch 3000/4157 | Cost: 0.1502\n",
      "Epoch: 029/050 | Batch 4000/4157 | Cost: 0.0614\n",
      "training accuracy: 97.68%\n",
      "valid accuracy: 96.20%\n",
      "Time elapsed: 260.10 min\n",
      "Epoch: 030/050 | Batch 000/4157 | Cost: 0.0922\n",
      "Epoch: 030/050 | Batch 1000/4157 | Cost: 0.1103\n",
      "Epoch: 030/050 | Batch 2000/4157 | Cost: 0.0814\n",
      "Epoch: 030/050 | Batch 3000/4157 | Cost: 0.0506\n",
      "Epoch: 030/050 | Batch 4000/4157 | Cost: 0.1734\n",
      "training accuracy: 97.69%\n",
      "valid accuracy: 96.13%\n",
      "Time elapsed: 269.04 min\n",
      "Epoch: 031/050 | Batch 000/4157 | Cost: 0.1000\n",
      "Epoch: 031/050 | Batch 1000/4157 | Cost: 0.0227\n",
      "Epoch: 031/050 | Batch 2000/4157 | Cost: 0.1718\n",
      "Epoch: 031/050 | Batch 3000/4157 | Cost: 0.0873\n",
      "Epoch: 031/050 | Batch 4000/4157 | Cost: 0.0753\n",
      "training accuracy: 97.67%\n",
      "valid accuracy: 96.17%\n",
      "Time elapsed: 278.07 min\n",
      "Epoch: 032/050 | Batch 000/4157 | Cost: 0.0953\n",
      "Epoch: 032/050 | Batch 1000/4157 | Cost: 0.0244\n",
      "Epoch: 032/050 | Batch 2000/4157 | Cost: 0.0515\n",
      "Epoch: 032/050 | Batch 3000/4157 | Cost: 0.0968\n",
      "Epoch: 032/050 | Batch 4000/4157 | Cost: 0.0896\n",
      "training accuracy: 97.67%\n",
      "valid accuracy: 96.23%\n",
      "Time elapsed: 287.10 min\n",
      "Epoch: 033/050 | Batch 000/4157 | Cost: 0.0858\n",
      "Epoch: 033/050 | Batch 1000/4157 | Cost: 0.0686\n",
      "Epoch: 033/050 | Batch 2000/4157 | Cost: 0.0543\n",
      "Epoch: 033/050 | Batch 3000/4157 | Cost: 0.0806\n",
      "Epoch: 033/050 | Batch 4000/4157 | Cost: 0.0895\n",
      "training accuracy: 97.66%\n",
      "valid accuracy: 96.15%\n",
      "Time elapsed: 296.08 min\n",
      "Epoch: 034/050 | Batch 000/4157 | Cost: 0.0978\n",
      "Epoch: 034/050 | Batch 1000/4157 | Cost: 0.1026\n",
      "Epoch: 034/050 | Batch 2000/4157 | Cost: 0.0278\n",
      "Epoch: 034/050 | Batch 3000/4157 | Cost: 0.0548\n",
      "Epoch: 034/050 | Batch 4000/4157 | Cost: 0.1300\n",
      "training accuracy: 97.66%\n",
      "valid accuracy: 96.11%\n",
      "Time elapsed: 305.03 min\n",
      "Epoch: 035/050 | Batch 000/4157 | Cost: 0.0991\n",
      "Epoch: 035/050 | Batch 1000/4157 | Cost: 0.0469\n",
      "Epoch: 035/050 | Batch 2000/4157 | Cost: 0.0113\n",
      "Epoch: 035/050 | Batch 3000/4157 | Cost: 0.0996\n",
      "Epoch: 035/050 | Batch 4000/4157 | Cost: 0.1408\n",
      "training accuracy: 97.69%\n",
      "valid accuracy: 96.24%\n",
      "Time elapsed: 314.03 min\n",
      "Epoch: 036/050 | Batch 000/4157 | Cost: 0.0788\n",
      "Epoch: 036/050 | Batch 1000/4157 | Cost: 0.0489\n",
      "Epoch: 036/050 | Batch 2000/4157 | Cost: 0.1000\n",
      "Epoch: 036/050 | Batch 3000/4157 | Cost: 0.0713\n",
      "Epoch: 036/050 | Batch 4000/4157 | Cost: 0.0700\n",
      "training accuracy: 97.70%\n",
      "valid accuracy: 96.24%\n",
      "Time elapsed: 323.07 min\n",
      "Epoch: 037/050 | Batch 000/4157 | Cost: 0.0530\n",
      "Epoch: 037/050 | Batch 1000/4157 | Cost: 0.1012\n",
      "Epoch: 037/050 | Batch 2000/4157 | Cost: 0.0592\n",
      "Epoch: 037/050 | Batch 3000/4157 | Cost: 0.1032\n",
      "Epoch: 037/050 | Batch 4000/4157 | Cost: 0.0435\n",
      "training accuracy: 97.64%\n",
      "valid accuracy: 96.25%\n",
      "Time elapsed: 332.01 min\n",
      "Epoch: 038/050 | Batch 000/4157 | Cost: 0.0605\n",
      "Epoch: 038/050 | Batch 1000/4157 | Cost: 0.1039\n",
      "Epoch: 038/050 | Batch 2000/4157 | Cost: 0.0889\n",
      "Epoch: 038/050 | Batch 3000/4157 | Cost: 0.0954\n",
      "Epoch: 038/050 | Batch 4000/4157 | Cost: 0.0890\n",
      "training accuracy: 97.69%\n",
      "valid accuracy: 96.24%\n",
      "Time elapsed: 341.00 min\n",
      "Epoch: 039/050 | Batch 000/4157 | Cost: 0.0313\n",
      "Epoch: 039/050 | Batch 1000/4157 | Cost: 0.1955\n",
      "Epoch: 039/050 | Batch 2000/4157 | Cost: 0.1388\n",
      "Epoch: 039/050 | Batch 3000/4157 | Cost: 0.0850\n",
      "Epoch: 039/050 | Batch 4000/4157 | Cost: 0.0574\n",
      "training accuracy: 97.71%\n",
      "valid accuracy: 96.23%\n",
      "Time elapsed: 350.03 min\n",
      "Epoch: 040/050 | Batch 000/4157 | Cost: 0.0289\n",
      "Epoch: 040/050 | Batch 1000/4157 | Cost: 0.0602\n",
      "Epoch: 040/050 | Batch 2000/4157 | Cost: 0.0735\n",
      "Epoch: 040/050 | Batch 3000/4157 | Cost: 0.0592\n",
      "Epoch: 040/050 | Batch 4000/4157 | Cost: 0.0692\n",
      "training accuracy: 97.62%\n",
      "valid accuracy: 96.17%\n",
      "Time elapsed: 359.08 min\n",
      "Epoch: 041/050 | Batch 000/4157 | Cost: 0.0815\n",
      "Epoch: 041/050 | Batch 1000/4157 | Cost: 0.0868\n",
      "Epoch: 041/050 | Batch 2000/4157 | Cost: 0.0714\n",
      "Epoch: 041/050 | Batch 3000/4157 | Cost: 0.1631\n",
      "Epoch: 041/050 | Batch 4000/4157 | Cost: 0.0758\n",
      "training accuracy: 97.72%\n",
      "valid accuracy: 96.29%\n",
      "Time elapsed: 367.99 min\n",
      "Epoch: 042/050 | Batch 000/4157 | Cost: 0.0591\n",
      "Epoch: 042/050 | Batch 1000/4157 | Cost: 0.0564\n",
      "Epoch: 042/050 | Batch 2000/4157 | Cost: 0.0635\n",
      "Epoch: 042/050 | Batch 3000/4157 | Cost: 0.1051\n",
      "Epoch: 042/050 | Batch 4000/4157 | Cost: 0.0734\n",
      "training accuracy: 97.64%\n",
      "valid accuracy: 96.14%\n",
      "Time elapsed: 377.04 min\n",
      "Epoch: 043/050 | Batch 000/4157 | Cost: 0.0693\n",
      "Epoch: 043/050 | Batch 1000/4157 | Cost: 0.0590\n",
      "Epoch: 043/050 | Batch 2000/4157 | Cost: 0.0638\n",
      "Epoch: 043/050 | Batch 3000/4157 | Cost: 0.0658\n",
      "Epoch: 043/050 | Batch 4000/4157 | Cost: 0.0599\n",
      "training accuracy: 97.76%\n",
      "valid accuracy: 96.38%\n",
      "Time elapsed: 386.09 min\n",
      "Epoch: 044/050 | Batch 000/4157 | Cost: 0.0503\n",
      "Epoch: 044/050 | Batch 1000/4157 | Cost: 0.1081\n",
      "Epoch: 044/050 | Batch 2000/4157 | Cost: 0.0783\n",
      "Epoch: 044/050 | Batch 3000/4157 | Cost: 0.0634\n",
      "Epoch: 044/050 | Batch 4000/4157 | Cost: 0.1016\n",
      "training accuracy: 97.62%\n",
      "valid accuracy: 96.20%\n",
      "Time elapsed: 395.10 min\n",
      "Epoch: 045/050 | Batch 000/4157 | Cost: 0.0675\n",
      "Epoch: 045/050 | Batch 1000/4157 | Cost: 0.1789\n",
      "Epoch: 045/050 | Batch 2000/4157 | Cost: 0.0497\n",
      "Epoch: 045/050 | Batch 3000/4157 | Cost: 0.0718\n",
      "Epoch: 045/050 | Batch 4000/4157 | Cost: 0.1590\n",
      "training accuracy: 97.68%\n",
      "valid accuracy: 96.25%\n",
      "Time elapsed: 404.06 min\n",
      "Epoch: 046/050 | Batch 000/4157 | Cost: 0.1274\n",
      "Epoch: 046/050 | Batch 1000/4157 | Cost: 0.1153\n",
      "Epoch: 046/050 | Batch 2000/4157 | Cost: 0.1211\n",
      "Epoch: 046/050 | Batch 3000/4157 | Cost: 0.0819\n",
      "Epoch: 046/050 | Batch 4000/4157 | Cost: 0.1036\n",
      "training accuracy: 97.73%\n",
      "valid accuracy: 96.24%\n",
      "Time elapsed: 413.10 min\n",
      "Epoch: 047/050 | Batch 000/4157 | Cost: 0.1166\n",
      "Epoch: 047/050 | Batch 1000/4157 | Cost: 0.0465\n",
      "Epoch: 047/050 | Batch 2000/4157 | Cost: 0.1046\n",
      "Epoch: 047/050 | Batch 3000/4157 | Cost: 0.0449\n",
      "Epoch: 047/050 | Batch 4000/4157 | Cost: 0.1335\n",
      "training accuracy: 97.68%\n",
      "valid accuracy: 96.31%\n",
      "Time elapsed: 422.12 min\n",
      "Epoch: 048/050 | Batch 000/4157 | Cost: 0.0980\n",
      "Epoch: 048/050 | Batch 1000/4157 | Cost: 0.0845\n",
      "Epoch: 048/050 | Batch 2000/4157 | Cost: 0.0559\n",
      "Epoch: 048/050 | Batch 3000/4157 | Cost: 0.0261\n",
      "Epoch: 048/050 | Batch 4000/4157 | Cost: 0.0484\n",
      "training accuracy: 97.69%\n",
      "valid accuracy: 96.28%\n",
      "Time elapsed: 431.10 min\n",
      "Epoch: 049/050 | Batch 000/4157 | Cost: 0.0621\n",
      "Epoch: 049/050 | Batch 1000/4157 | Cost: 0.0815\n",
      "Epoch: 049/050 | Batch 2000/4157 | Cost: 0.0569\n",
      "Epoch: 049/050 | Batch 3000/4157 | Cost: 0.1636\n",
      "Epoch: 049/050 | Batch 4000/4157 | Cost: 0.0797\n",
      "training accuracy: 97.58%\n",
      "valid accuracy: 96.11%\n",
      "Time elapsed: 440.10 min\n",
      "Epoch: 050/050 | Batch 000/4157 | Cost: 0.0517\n",
      "Epoch: 050/050 | Batch 1000/4157 | Cost: 0.0388\n",
      "Epoch: 050/050 | Batch 2000/4157 | Cost: 0.0833\n",
      "Epoch: 050/050 | Batch 3000/4157 | Cost: 0.1234\n",
      "Epoch: 050/050 | Batch 4000/4157 | Cost: 0.0752\n",
      "training accuracy: 97.67%\n",
      "valid accuracy: 96.31%\n",
      "Time elapsed: 449.14 min\n",
      "Total Training Time: 449.14 min\n",
      "Test accuracy: 96.48%\n"
     ]
    }
   ],
   "source": [
    "start_time = time.time()\n",
    "\n",
    "for epoch in range(NUM_EPOCHS):\n",
    "    model.train()\n",
    "    for batch_idx, batch_data in enumerate(train_loader):\n",
    "        \n",
    "        text, text_lengths = batch_data.content\n",
    "        \n",
    "        ### FORWARD AND BACK PROP\n",
    "        logits = model(text, text_lengths).squeeze(1)\n",
    "        cost = F.cross_entropy(logits, batch_data.classlabel.long())\n",
    "        optimizer.zero_grad()\n",
    "        \n",
    "        cost.backward()\n",
    "        \n",
    "        ### UPDATE MODEL PARAMETERS\n",
    "        optimizer.step()\n",
    "        \n",
    "        ### LOGGING\n",
    "        if not batch_idx % 1000:\n",
    "            print(f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '\n",
    "                   f'Batch {batch_idx:03d}/{len(train_loader):03d} | '\n",
    "                   f'Cost: {cost:.4f}')\n",
    "\n",
    "    # switch to evaluation mode so dropout is disabled while computing accuracies\n",
    "    model.eval()\n",
    "    with torch.set_grad_enabled(False):\n",
    "        print(f'training accuracy: '\n",
    "              f'{compute_accuracy(model, train_loader, DEVICE):.2f}%'\n",
    "              f'\\nvalid accuracy: '\n",
    "              f'{compute_accuracy(model, valid_loader, DEVICE):.2f}%')\n",
    "        \n",
    "    print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')\n",
    "    \n",
    "print(f'Total Training Time: {(time.time() - start_time)/60:.2f} min')\n",
    "print(f'Test accuracy: {compute_accuracy(model, test_loader, DEVICE):.2f}%')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Evaluating the model on some new text collected from recent Yelp reviews; these reviews are not part of the training or test sets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "jt55pscgFdKZ"
   },
   "outputs": [],
   "source": [
    "import spacy\n",
    "\n",
    "# the 'en' shorthand works in spaCy 2.x (the version used in this notebook);\n",
    "# spaCy 3+ requires the full model name, e.g. spacy.load('en_core_web_sm')\n",
    "nlp = spacy.load('en')\n",
    "\n",
    "\n",
    "map_dictionary = {\n",
    "    0: \"negative\",\n",
    "    1: \"positive\"\n",
    "}\n",
    "\n",
    "\n",
    "def predict_class(model, sentence, min_len=4):\n",
    "    # Somewhat based on\n",
    "    # https://github.com/bentrevett/pytorch-sentiment-analysis/\n",
    "    # blob/master/5%20-%20Multi-class%20Sentiment%20Analysis.ipynb\n",
    "    model.eval()\n",
    "    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]\n",
    "    if len(tokenized) < min_len:\n",
    "        tokenized += ['<pad>'] * (min_len - len(tokenized))\n",
    "    indexed = [TEXT.vocab.stoi[t] for t in tokenized]\n",
    "    length = [len(indexed)]\n",
    "    tensor = torch.LongTensor(indexed).to(DEVICE)\n",
    "    tensor = tensor.unsqueeze(1)\n",
    "    length_tensor = torch.LongTensor(length)\n",
    "    preds = model(tensor, length_tensor)\n",
    "    preds = torch.softmax(preds, dim=1)\n",
    "\n",
    "    proba, class_label = preds.max(dim=1)\n",
    "    return proba.item(), class_label.item()"
   ]
  },
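  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The padding step inside `predict_class` above guarantees the model always receives at least `min_len` tokens. A minimal sketch of that logic in isolation (the `'<pad>'` string matches torchtext's default pad token; this cell is illustrative only and not part of the original notebook):\n",
    "\n",
    "```python\n",
    "tokenized = ['great', 'food']\n",
    "min_len = 4\n",
    "if len(tokenized) < min_len:\n",
    "    tokenized += ['<pad>'] * (min_len - len(tokenized))\n",
    "# tokenized is now ['great', 'food', '<pad>', '<pad>']\n",
    "```"
   ]
  },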
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([1, 64])\n",
      "Class Label: 1 -> positive\n",
      "Probability: 0.9960760474205017\n"
     ]
    }
   ],
   "source": [
    "text = \"\"\"\n",
    "I have returned many times since my original review, and I can attest to the fact that, indeed, \n",
    "the plethora of books she provides does not disappoint. Although under new ownership, \n",
    "the vibe and the focus remains unchanged. \n",
    "\n",
    "I still collect Kobayashi poetry anytime I stumble upon it.\n",
    "\n",
    "My absolute favorite bookshop, card vendor, and truth teller. \n",
    "\n",
    "Until next time.\n",
    "\"\"\"\n",
    "\n",
    "proba, pred_label = predict_class(model, text)\n",
    "\n",
    "print(f'Class Label: {pred_label} -> {map_dictionary[pred_label]}')\n",
    "print(f'Probability: {proba}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([1, 64])\n",
      "Class Label: 0 -> negative\n",
      "Probability: 0.999991774559021\n"
     ]
    }
   ],
   "source": [
    "text = \"\"\"\n",
    "Horrible customer service experience!!\n",
    "\n",
    "Why I even bothered to go here is beyond me.. \n",
    "My wife asked me to get some gift cards and my dad \n",
    "mentioned that he would give me a yearly membership as a present.  \n",
    "I made the mistake of not listening to that little voice in my head \n",
    "screaming \"DON'T!!!!\".  I got the gift cards and asked for the membership \n",
    "and then realized that they hadn't given me the membership.  So I go in the \n",
    "next day and asked someone in customer service if I could get the membership \n",
    "and then have them apply the discount to the previous purchases and some new \n",
    "purchases and their response was \"Of course..  Talk to Scott, our head cashier, \n",
    "and he will gladly take care of this\".  I go to Scott and he tells me \"I've never \n",
    "done that, we would never do that and whoever told you that was obviously \n",
    "wrong\"  Needless to say, I did not make any new purchases and I will promptly \n",
    "return any of the previous purchases and give my hard-earned money to someone who deserves it.\n",
    "\n",
    "Bottom line..  Overpriced lousy customer service is not for me.  In this day\n",
    "and age they should know better than that and you should use your buying power to show them. Stay away..\n",
    "\"\"\"\n",
    "\n",
    "proba, pred_label = predict_class(model, text)\n",
    "\n",
    "print(f'Class Label: {pred_label} -> {map_dictionary[pred_label]}')\n",
    "print(f'Probability: {proba}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "7lRusB3dF80X"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "pandas    0.24.2\n",
      "torch     1.3.0\n",
      "numpy     1.17.2\n",
      "spacy     2.2.3\n",
      "torchtext 0.4.0\n",
      "\n"
     ]
    }
   ],
   "source": [
    "%watermark -iv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.save(model.state_dict(), 'rnn_bi_multilayer_lstm_own_csv_yelp-polarity.pt')"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "rnn_lstm_packed_imdb.ipynb",
   "provenance": [],
   "version": "0.3.2"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
