{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Question generation with seq2seq model\n",
     "- Generate similar questions using the Quora question-pairs dataset\n",
     "- First, train a seq2seq model on duplicate question pairs; then, given only one question of a pair, predict the other\n",
    "- reference: https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html\n",
    "- dataset: http://qim.ec.quoracdn.net/quora_duplicate_questions.tsv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 117,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "from IPython.display import SVG\n",
    "from keras.utils.vis_utils import model_to_dot\n",
    "from keras.models import Model\n",
    "from keras.layers import *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import dataset\n",
    "- Use Quora question pairs dataset\n",
     "- If two questions are duplicates, 'is_duplicate' is marked 1; otherwise 0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "url = 'http://qim.ec.quoracdn.net/quora_duplicate_questions.tsv'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>qid1</th>\n",
       "      <th>qid2</th>\n",
       "      <th>question1</th>\n",
       "      <th>question2</th>\n",
       "      <th>is_duplicate</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>What is the step by step guide to invest in sh...</td>\n",
       "      <td>What is the step by step guide to invest in sh...</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>4</td>\n",
       "      <td>What is the story of Kohinoor (Koh-i-Noor) Dia...</td>\n",
       "      <td>What would happen if the Indian government sto...</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>6</td>\n",
       "      <td>How can I increase the speed of my internet co...</td>\n",
       "      <td>How can Internet speed be increased by hacking...</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>8</td>\n",
       "      <td>Why am I mentally very lonely? How can I solve...</td>\n",
       "      <td>Find the remainder when [math]23^{24}[/math] i...</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>4</td>\n",
       "      <td>9</td>\n",
       "      <td>10</td>\n",
       "      <td>Which one dissolve in water quikly sugar, salt...</td>\n",
       "      <td>Which fish would survive in salt water?</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   id  qid1  qid2                                          question1  \\\n",
       "0   0     1     2  What is the step by step guide to invest in sh...   \n",
       "1   1     3     4  What is the story of Kohinoor (Koh-i-Noor) Dia...   \n",
       "2   2     5     6  How can I increase the speed of my internet co...   \n",
       "3   3     7     8  Why am I mentally very lonely? How can I solve...   \n",
       "4   4     9    10  Which one dissolve in water quikly sugar, salt...   \n",
       "\n",
       "                                           question2  is_duplicate  \n",
       "0  What is the step by step guide to invest in sh...             0  \n",
       "1  What would happen if the Indian government sto...             0  \n",
       "2  How can Internet speed be increased by hacking...             0  \n",
       "3  Find the remainder when [math]23^{24}[/math] i...             0  \n",
       "4            Which fish would survive in salt water?             0  "
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# import dataset using read_table() function in pandas\n",
    "data = pd.read_table(url, sep ='\\t')\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>qid1</th>\n",
       "      <th>qid2</th>\n",
       "      <th>question1</th>\n",
       "      <th>question2</th>\n",
       "      <th>is_duplicate</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>5</td>\n",
       "      <td>11</td>\n",
       "      <td>12</td>\n",
       "      <td>Astrology: I am a Capricorn Sun Cap moon and c...</td>\n",
       "      <td>I'm a triple Capricorn (Sun, Moon and ascendan...</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>7</td>\n",
       "      <td>15</td>\n",
       "      <td>16</td>\n",
       "      <td>How can I be a good geologist?</td>\n",
       "      <td>What should I do to be a great geologist?</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>11</th>\n",
       "      <td>11</td>\n",
       "      <td>23</td>\n",
       "      <td>24</td>\n",
       "      <td>How do I read and find my YouTube comments?</td>\n",
       "      <td>How can I see all my Youtube comments?</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>12</th>\n",
       "      <td>12</td>\n",
       "      <td>25</td>\n",
       "      <td>26</td>\n",
       "      <td>What can make Physics easy to learn?</td>\n",
       "      <td>How can you make physics easy to learn?</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>13</th>\n",
       "      <td>13</td>\n",
       "      <td>27</td>\n",
       "      <td>28</td>\n",
       "      <td>What was your first sexual experience like?</td>\n",
       "      <td>What was your first sexual experience?</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    id  qid1  qid2                                          question1  \\\n",
       "5    5    11    12  Astrology: I am a Capricorn Sun Cap moon and c...   \n",
       "7    7    15    16                     How can I be a good geologist?   \n",
       "11  11    23    24        How do I read and find my YouTube comments?   \n",
       "12  12    25    26               What can make Physics easy to learn?   \n",
       "13  13    27    28        What was your first sexual experience like?   \n",
       "\n",
       "                                            question2  is_duplicate  \n",
       "5   I'm a triple Capricorn (Sun, Moon and ascendan...             1  \n",
       "7           What should I do to be a great geologist?             1  \n",
       "11             How can I see all my Youtube comments?             1  \n",
       "12            How can you make physics easy to learn?             1  \n",
       "13             What was your first sexual experience?             1  "
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# select question pairs that are \"duplicate\"\n",
    "data = data[data['is_duplicate'] == 1]\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data processing\n",
     "- Preprocess the dataset into the format the seq2seq model expects"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# number of samples and minimum count of words to include\n",
    "num_samples = 10000\n",
    "min_count = 5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "q1 = list(data['question1'])[:num_samples]\n",
    "q2 = list(data['question2'])[:num_samples]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
     "# collect all words (tokens) from the input and target datasets\n",
    "input_words = []\n",
    "target_words = []"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "for i in range(len(q1)):\n",
    "    for token in q1[i].split():\n",
    "        input_words.append(token)\n",
    "    for token in q2[i].split():\n",
    "        target_words.append(token)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# convert lists into sets to choose only unique tokens\n",
    "unique_input_words = set(input_words)\n",
    "unique_target_words = set(target_words)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Remove tokens that occur fewer than 5 times (minimum count = 5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 32.4 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "to_delete = []\n",
    "for token in unique_input_words:\n",
    "    if input_words.count(token) < min_count:\n",
    "        to_delete.append(token)\n",
    "\n",
    "for token in to_delete:\n",
    "    unique_input_words.remove(token)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 32.6 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "to_delete = []\n",
    "for token in unique_target_words:\n",
    "    if target_words.count(token) < min_count:\n",
    "        to_delete.append(token)\n",
    "\n",
    "for token in to_delete:\n",
    "    unique_target_words.remove(token)"
   ]
  },
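  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside, the two filtering cells above call list.count() inside a loop, which is quadratic and takes ~30s each; the same rare-token filtering runs in linear time with collections.Counter. A minimal sketch on toy data (the variable names here are illustrative, not from the cells above):\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "# count every token once, then keep only tokens seen at least min_count times\n",
    "min_count = 5\n",
    "words = ['what', 'what', 'what', 'what', 'what', 'is', 'is', 'is', 'is', 'a']\n",
    "counts = Counter(words)\n",
    "unique_words = {w for w, c in counts.items() if c >= min_count}\n",
    "print(unique_words)\n",
    "```"
   ]
  },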
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "To mark the beginning and end of each sentence for the model, inject symbols ('@' and '#') into the target sentences\n",
     "- '@': marks the beginning of a sentence\n",
     "- '#': marks the end of a sentence"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 94,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "q1 = [q.split() for q in q1]\n",
    "q2 = [('@ ' + q + ' #').split() for q in q2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# also add symbols to unique token set\n",
    "unique_target_words.update(['@', '#'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Filtering sentences (questions)\n",
     "- Keep only the tokens of each sentence that appear in the unique-token sets (i.e., drop the rare tokens)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "for i in range(len(q1)):\n",
    "    q1[i] = [token for token in q1[i] if token in unique_input_words]\n",
    "    q2[i] = [token for token in q2[i] if token in unique_target_words]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['I', 'am', 'a', 'Sun', 'moon', 'and', 'does', 'that', 'say', 'about', 'me?']\n",
      "['@', \"I'm\", 'a', 'and', 'in', 'What', 'does', 'this', 'say', 'about', 'me?', '#']\n",
      "['What', 'would', 'a', 'Trump', 'presidency', 'mean', 'for', 'current', 'international', 'students', 'on', 'an']\n",
      "['@', 'How', 'will', 'a', 'Trump', 'presidency', 'affect', 'the', 'students', 'in', 'US', 'or', 'planning', 'to', 'study', 'in', 'US?', '#']\n"
     ]
    }
   ],
   "source": [
    "print(q1[0])\n",
    "print(q2[0])\n",
    "print(q1[5])\n",
    "print(q2[5])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "del input_words\n",
    "del target_words"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "unique_input_words = sorted(list(unique_input_words))\n",
    "unique_target_words = sorted(list(unique_target_words))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# number of tokens => dimensionality of one-hot encoding space of tokens\n",
    "num_encoder_tokens = len(unique_input_words)\n",
    "num_decoder_tokens = len(unique_target_words)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# maximum sequence length \n",
    "max_encoder_seq_len = max([len(q) for q in q1])\n",
    "max_decoder_seq_len = max([len(q) for q in q2])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 106,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Total Number of samples:  10000\n",
      "Number of unique input tokens (words):  1967\n",
      "Number of unique output tokens (words):  1997\n",
      "Max seq length for inputs:  35\n",
      "Max seq length for outputs:  45\n"
     ]
    }
   ],
   "source": [
    "print('Total Number of samples: ', len(q1))\n",
    "print('Number of unique input tokens (words): ', num_encoder_tokens)\n",
    "print('Number of unique output tokens (words): ', num_decoder_tokens)\n",
    "print('Max seq length for inputs: ', max_encoder_seq_len)\n",
    "print('Max seq length for outputs: ', max_decoder_seq_len)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 107,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "input_token_idx = dict([(token, i) for i, token in enumerate(unique_input_words)])\n",
    "target_token_idx = dict([(token, i) for i, token in enumerate(unique_target_words)])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Create input & target data arrays\n",
     "- Note that the arrays are initialized with zeros at maximum sequence length\n",
     "- Hence, any sequence shorter than the max length is automatically zero-padded"
   ]
  },
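  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy illustration of this one-hot encoding with implicit zero padding (the tiny vocabulary below is hypothetical, just for demonstration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# toy vocabulary and a sequence shorter than the max length\n",
    "vocab = {'how': 0, 'are': 1, 'you': 2}\n",
    "seq = ['how', 'are']\n",
    "max_len = 4\n",
    "\n",
    "x = np.zeros((max_len, len(vocab)), dtype='float32')\n",
    "for t, token in enumerate(seq):\n",
    "    x[t, vocab[token]] = 1.\n",
    "\n",
    "# timesteps beyond len(seq) stay all-zero, i.e. the padding comes for free\n",
    "print(x)\n",
    "```"
   ]
  },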
  {
   "cell_type": "code",
   "execution_count": 108,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "encoder_input = np.zeros((len(q1), max_encoder_seq_len, num_encoder_tokens), dtype = 'float32')\n",
    "decoder_input = np.zeros((len(q1), max_decoder_seq_len, num_decoder_tokens), dtype = 'float32')\n",
    "decoder_target = np.zeros((len(q1), max_decoder_seq_len, num_decoder_tokens), dtype = 'float32')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Note that **decoder_target** is identical to **decoder_input**, except that decoder_target is offset by one timestep\n",
     "- For instance, consider the question **\"How can I see all my Youtube comments?\"**\n",
     "- With the start/end symbols injected, the decoder_input instance is **\"@ / How / can / I / see / all / my / Youtube / comments? / #\"**\n",
     "- The corresponding decoder_target instance is **\"How / can / I / see / all / my / Youtube / comments? / #\"**\n",
     "- As in the table below, at every timestep the seq2seq model sees **Input** and predicts **Target**\n",
     "\n",
     "| Input | | Target |\n",
     "|---------|-------|-----|\n",
     "| @ | ========> | How |\n",
     "| How | ========> | can |\n",
     "| can | ========> | I |\n",
     "| I | ========> | see |\n",
     "| see | ========> | all |\n",
     "| all | ========> | my |\n",
     "| my | ========> | Youtube |\n",
     "| Youtube | ========> | comments? |\n",
     "| comments? | ========> | # |"
   ]
  },
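  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The one-timestep offset above can be sketched in plain Python (toy tokens, for illustration only):\n",
    "\n",
    "```python\n",
    "# pair each decoder input token with the next token as its prediction target\n",
    "tokens = ['@', 'How', 'can', 'I', 'see', '#']\n",
    "pairs = list(zip(tokens, tokens[1:]))\n",
    "for inp, tgt in pairs:\n",
    "    print(inp, '=>', tgt)\n",
    "```"
   ]
  },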
  {
   "cell_type": "code",
   "execution_count": 109,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "for i, (x, y) in enumerate(zip(q1, q2)):\n",
    "    for t, token in enumerate(x):\n",
    "        encoder_input[i, t, input_token_idx[token]] = 1.\n",
    "    for t, token in enumerate(y):\n",
    "        decoder_input[i, t, target_token_idx[token]] = 1.\n",
    "        if t > 0:\n",
    "            decoder_target[i, t-1, target_token_idx[token]] = 1."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create model\n",
    "- Create seq2seq model\n",
     "- The seq2seq model here is similar to a neural machine translation model, but without attention\n",
    "    - Consists of two LSTMs (encoder & decoder)\n",
    "    \n",
    "<br>\n",
    "<img src=\"https://blog.keras.io/img/seq2seq/seq2seq-inference.png\" style=\"width: 500px\"/>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 113,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "latent_dim = 300"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 114,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "encoder_inputs = Input(shape = (None, num_encoder_tokens))\n",
    "encoder = LSTM(latent_dim, return_state = True)\n",
    "_, state_h, state_c = encoder(encoder_inputs)\n",
    "encoder_states = [state_h, state_c]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 115,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "decoder_inputs = Input(shape = (None, num_decoder_tokens))\n",
    "lstm = LSTM(latent_dim, return_sequences = True, return_state = True)\n",
    "decoder_outputs, _, _ = lstm(decoder_inputs, initial_state = encoder_states)\n",
    "dense = Dense(num_decoder_tokens, activation = 'softmax')\n",
    "decoder_outputs = dense(decoder_outputs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 116,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "model = Model([encoder_inputs, decoder_inputs], decoder_outputs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 118,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/svg+xml": [
       "<svg height=\"264pt\" viewBox=\"0.00 0.00 277.00 264.00\" width=\"277pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n",
       "<g class=\"graph\" id=\"graph0\" transform=\"scale(1 1) rotate(0) translate(4 260)\">\n",
       "<title>G</title>\n",
       "<polygon fill=\"white\" points=\"-4,4 -4,-260 273,-260 273,4 -4,4\" stroke=\"none\"/>\n",
       "<!-- 3063682290520 -->\n",
       "<g class=\"node\" id=\"node1\"><title>3063682290520</title>\n",
       "<polygon fill=\"none\" points=\"0,-219.5 0,-255.5 133,-255.5 133,-219.5 0,-219.5\" stroke=\"black\"/>\n",
       "<text font-family=\"Times New Roman,serif\" font-size=\"14.00\" text-anchor=\"middle\" x=\"66.5\" y=\"-233.8\">input_1: InputLayer</text>\n",
       "</g>\n",
       "<!-- 3063682290576 -->\n",
       "<g class=\"node\" id=\"node3\"><title>3063682290576</title>\n",
       "<polygon fill=\"none\" points=\"15.5,-146.5 15.5,-182.5 117.5,-182.5 117.5,-146.5 15.5,-146.5\" stroke=\"black\"/>\n",
       "<text font-family=\"Times New Roman,serif\" font-size=\"14.00\" text-anchor=\"middle\" x=\"66.5\" y=\"-160.8\">lstm_1: LSTM</text>\n",
       "</g>\n",
       "<!-- 3063682290520&#45;&gt;3063682290576 -->\n",
       "<g class=\"edge\" id=\"edge1\"><title>3063682290520-&gt;3063682290576</title>\n",
       "<path d=\"M66.5,-219.313C66.5,-211.289 66.5,-201.547 66.5,-192.569\" fill=\"none\" stroke=\"black\"/>\n",
       "<polygon fill=\"black\" points=\"70.0001,-192.529 66.5,-182.529 63.0001,-192.529 70.0001,-192.529\" stroke=\"black\"/>\n",
       "</g>\n",
       "<!-- 3063766925264 -->\n",
       "<g class=\"node\" id=\"node2\"><title>3063766925264</title>\n",
       "<polygon fill=\"none\" points=\"136,-146.5 136,-182.5 269,-182.5 269,-146.5 136,-146.5\" stroke=\"black\"/>\n",
       "<text font-family=\"Times New Roman,serif\" font-size=\"14.00\" text-anchor=\"middle\" x=\"202.5\" y=\"-160.8\">input_2: InputLayer</text>\n",
       "</g>\n",
       "<!-- 3063766924928 -->\n",
       "<g class=\"node\" id=\"node4\"><title>3063766924928</title>\n",
       "<polygon fill=\"none\" points=\"83.5,-73.5 83.5,-109.5 185.5,-109.5 185.5,-73.5 83.5,-73.5\" stroke=\"black\"/>\n",
       "<text font-family=\"Times New Roman,serif\" font-size=\"14.00\" text-anchor=\"middle\" x=\"134.5\" y=\"-87.8\">lstm_2: LSTM</text>\n",
       "</g>\n",
       "<!-- 3063766925264&#45;&gt;3063766924928 -->\n",
       "<g class=\"edge\" id=\"edge2\"><title>3063766925264-&gt;3063766924928</title>\n",
       "<path d=\"M186.039,-146.313C177.603,-137.505 167.183,-126.625 157.925,-116.958\" fill=\"none\" stroke=\"black\"/>\n",
       "<polygon fill=\"black\" points=\"160.254,-114.33 150.809,-109.529 155.198,-119.172 160.254,-114.33\" stroke=\"black\"/>\n",
       "</g>\n",
       "<!-- 3063682290576&#45;&gt;3063766924928 -->\n",
       "<g class=\"edge\" id=\"edge3\"><title>3063682290576-&gt;3063766924928</title>\n",
       "<path d=\"M82.9609,-146.313C91.397,-137.505 101.817,-126.625 111.075,-116.958\" fill=\"none\" stroke=\"black\"/>\n",
       "<polygon fill=\"black\" points=\"113.802,-119.172 118.191,-109.529 108.746,-114.33 113.802,-119.172\" stroke=\"black\"/>\n",
       "</g>\n",
       "<!-- 3063766925832 -->\n",
       "<g class=\"node\" id=\"node5\"><title>3063766925832</title>\n",
       "<polygon fill=\"none\" points=\"81,-0.5 81,-36.5 188,-36.5 188,-0.5 81,-0.5\" stroke=\"black\"/>\n",
       "<text font-family=\"Times New Roman,serif\" font-size=\"14.00\" text-anchor=\"middle\" x=\"134.5\" y=\"-14.8\">dense_1: Dense</text>\n",
       "</g>\n",
       "<!-- 3063766924928&#45;&gt;3063766925832 -->\n",
       "<g class=\"edge\" id=\"edge5\"><title>3063766924928-&gt;3063766925832</title>\n",
       "<path d=\"M134.5,-73.3129C134.5,-65.2895 134.5,-55.5475 134.5,-46.5691\" fill=\"none\" stroke=\"black\"/>\n",
       "<polygon fill=\"black\" points=\"138,-46.5288 134.5,-36.5288 131,-46.5289 138,-46.5288\" stroke=\"black\"/>\n",
       "</g>\n",
       "</g>\n",
       "</svg>"
      ],
      "text/plain": [
       "<IPython.core.display.SVG object>"
      ]
     },
     "execution_count": 118,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# visualizing the model\n",
    "SVG(model_to_dot(model).create(prog='dot', format='svg'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 119,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "model.compile(optimizer = 'adam', loss = 'categorical_crossentropy')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 141,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 3h 6min 3s\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x2c9528f9320>"
      ]
     },
     "execution_count": 141,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "%%time\n",
    "model.fit([encoder_input, decoder_input], decoder_target, batch_size = 100, epochs = 100, verbose = 0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sampling and Testing\n",
    "- Sample some questions and test their predictions\n",
    "- Separate encoder & decoder model and decode sentences"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 121,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "encoder_model = Model(encoder_inputs, encoder_states)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 122,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "decoder_state_input_h = Input(shape = (latent_dim, ))\n",
    "decoder_state_input_c = Input(shape = (latent_dim, ))\n",
    "decoder_state_inputs = [decoder_state_input_h, decoder_state_input_c]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 124,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "decoder_outputs, state_h, state_c = lstm(decoder_inputs, initial_state = decoder_state_inputs)\n",
    "decoder_states = [state_h, state_c]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 125,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "decoder_outputs = dense(decoder_outputs)\n",
    "decoder_model = Model([decoder_inputs] + decoder_state_inputs, [decoder_outputs] + decoder_states)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 127,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "reverse_input_token_idx = dict((i, token) for token, i in input_token_idx.items())\n",
    "reverse_target_token_idx = dict((i, token) for token, i in target_token_idx.items())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 143,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
     "def decode_sequence(input_seq):\n",
     "    # Encode the input as state vectors.\n",
     "    states_value = encoder_model.predict(input_seq)\n",
     "\n",
     "    # Generate an empty target sequence of length 1.\n",
     "    target_seq = np.zeros((1, 1, num_decoder_tokens))\n",
     "    # Populate the first timestep of the target sequence with the start symbol.\n",
     "    target_seq[0, 0, target_token_idx['@']] = 1.\n",
     "\n",
     "    # Sampling loop for a batch of sequences\n",
     "    # (to simplify, here we assume a batch of size 1).\n",
     "    stop_condition = False\n",
     "    decoded_tokens = []\n",
     "    while not stop_condition:\n",
     "        output_tokens, h, c = decoder_model.predict(\n",
     "            [target_seq] + states_value)\n",
     "\n",
     "        # Sample the most likely next token\n",
     "        sampled_token_index = np.argmax(output_tokens[0, -1, :])\n",
     "        sampled_token = reverse_target_token_idx[sampled_token_index]\n",
     "        decoded_tokens.append(sampled_token)\n",
     "\n",
     "        # Exit condition: either hit max length (counted in tokens,\n",
     "        # not characters) or find the stop symbol.\n",
     "        if (sampled_token == '#' or\n",
     "                len(decoded_tokens) > max_decoder_seq_len):\n",
     "            stop_condition = True\n",
     "\n",
     "        # Update the target sequence (of length 1).\n",
     "        target_seq = np.zeros((1, 1, num_decoder_tokens))\n",
     "        target_seq[0, 0, sampled_token_index] = 1.\n",
     "\n",
     "        # Update states\n",
     "        states_value = [h, c]\n",
     "\n",
     "    return ' '.join(decoded_tokens)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Results\n",
     "- You can see that some of the generated questions make sense, even with this naive model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 144,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-\n",
      "Input sentence: ['I', 'am', 'a', 'Sun', 'moon', 'and', 'does', 'that', 'say', 'about', 'me?']\n",
      "Decoded sentence:  I'm a and in What does this say about me? #\n",
      "-\n",
      "Input sentence: ['How', 'can', 'I', 'be', 'a', 'good']\n",
      "Decoded sentence:  How do I become a film I live in #\n",
      "-\n",
      "Input sentence: ['How', 'do', 'I', 'read', 'and', 'find', 'my', 'YouTube']\n",
      "Decoded sentence:  How can I change my #\n",
      "-\n",
      "Input sentence: ['What', 'can', 'make', 'easy', 'to', 'learn?']\n",
      "Decoded sentence:  How can you make physics easy to learn? #\n",
      "-\n",
      "Input sentence: ['What', 'was', 'your', 'first', 'sexual', 'experience', 'like?']\n",
      "Decoded sentence:  What was your first sexual experience? #\n",
      "-\n",
      "Input sentence: ['What', 'would', 'a', 'Trump', 'presidency', 'mean', 'for', 'current', 'international', 'students', 'on', 'an']\n",
      "Decoded sentence:  How would a Trump presidency and get if Trump\n",
      "-\n",
      "Input sentence: ['What', 'does', 'mean?']\n",
      "Decoded sentence:  What does #\n",
      "-\n",
      "Input sentence: ['Why', 'are', 'so', 'many', 'Quora', 'users', 'questions', 'that', 'are', 'answered', 'on', 'Google?']\n",
      "Decoded sentence:  Why do people ask Quora questions which can be\n",
      "-\n",
      "Input sentence: ['Why', 'do', 'look']\n",
      "Decoded sentence:  Why are and #\n",
      "-\n",
      "Input sentence: ['How', 'should', 'I', 'prepare', 'for', 'CA', 'final']\n",
      "Decoded sentence:  How do I prepare for KVPY interview? #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'some', 'special', 'for', 'someone', 'with', 'a', 'that', 'gets', 'during', 'the', 'night?']\n",
      "Decoded sentence:  How can I keep my from getting at night? #\n",
      "-\n",
      "Input sentence: ['What', 'Game', 'of', 'Thrones', 'would', 'be', 'the', 'most', 'likely', 'to', 'give', 'you']\n",
      "Decoded sentence:  What Game of Thrones would you like for or what\n",
      "-\n",
      "Input sentence: ['How', 'do', 'we', 'prepare', 'for']\n",
      "Decoded sentence:  How do I prepare for civil #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'some', 'examples', 'of', 'products', 'that', 'can', 'be', 'make', 'from']\n",
      "Decoded sentence:  What are some of the made from #\n",
      "-\n",
      "Input sentence: ['How', 'do', 'I', 'make']\n",
      "Decoded sentence:  How do I make my #\n",
      "-\n",
      "Input sentence: ['Is', 'good', 'for', 'RBI', 'B', 'preparation?']\n",
      "Decoded sentence:  How is career online program for RBI #\n",
      "-\n",
      "Input sentence: ['Will', 'a', 'play', 'on', 'a', 'If', 'so,', 'how?']\n",
      "Decoded sentence:  How can you play a on a #\n",
      "-\n",
      "Input sentence: ['What', 'is', 'the', 'thing', \"you've\", 'ever', 'and', 'why?']\n",
      "Decoded sentence:  What is the most delicious you've ever and why?\n",
      "-\n",
      "Input sentence: ['I', 'was', 'off', 'I', \"can't\", 'remember', 'my', 'Gmail', 'password', 'and', 'just', 'the', 'recovery', 'email', 'is', 'no', 'longer', 'What', 'can', 'I', 'do?']\n",
      "Decoded sentence:  I can't remember my Gmail password or my recovery\n",
      "-\n",
      "Input sentence: ['How', 'is', 'the', 'new', 'Harry', 'Potter', 'book', 'Potter', 'and', 'the']\n",
      "Decoded sentence:  How bad is the new book by #\n",
      "-\n",
      "Input sentence: ['What', 'is', 'Java', 'programming?', 'How', 'To', 'Java', '?']\n",
      "Decoded sentence:  How do I learn a computer language like #\n",
      "-\n",
      "Input sentence: ['What', 'is', 'the', 'best', 'book', 'ever', 'made?']\n",
      "Decoded sentence:  What is the most important book you have ever\n",
      "-\n",
      "Input sentence: ['Can', 'we', 'ever', 'store', 'energy', 'in']\n",
      "Decoded sentence:  Is it possible to store the energy of #\n",
      "-\n",
      "Input sentence: ['What', 'is', 'a', 'personality', 'disorder?']\n",
      "Decoded sentence:  What is #\n",
      "-\n",
      "Input sentence: ['How', 'I', 'can', 'speak', 'English', 'fluently?']\n",
      "Decoded sentence:  How can I learn to speak English fluently? #\n",
      "-\n",
      "Input sentence: ['How', 'helpful', 'is', 'data', 'recovery', 'support', 'phone', 'number', 'to', 'recover', 'your', 'data']\n",
      "Decoded sentence:  What is the customer support phone number USA?\n",
      "-\n",
      "Input sentence: ['Who', 'is', 'the', 'of', 'all', 'time', 'and', 'how', 'can', 'I', 'his', 'level?']\n",
      "Decoded sentence:  Who is the of all time and how can I the level\n",
      "-\n",
      "Input sentence: ['What', 'is', 'purpose', 'of', 'life?']\n",
      "Decoded sentence:  What is the purpose of life according to #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'some', 'of', 'the', 'high', 'salary', 'income', 'jobs', 'in', 'the', 'field', 'of']\n",
      "Decoded sentence:  What are some good online sources to free books?\n",
      "-\n",
      "Input sentence: ['How', 'can', 'I', 'increase', 'my', 'height', 'after', '21']\n",
      "Decoded sentence:  How can I increase my height after #\n",
      "-\n",
      "Input sentence: ['What', 'were', 'the', 'major', 'effects', 'of', 'the', 'cambodia', 'earthquake,', 'and', 'how', 'do', 'these', 'effects', 'compare', 'to', 'the', 'in']\n",
      "Decoded sentence:  What were the major effects of the cambodia earthquake,\n",
      "-\n",
      "Input sentence: ['Which', 'is', 'the', 'best', 'gaming', 'laptop', 'under', 'INR?']\n",
      "Decoded sentence:  Which is the best gaming laptop under Rs #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'some', 'of', 'the', 'best', 'romantic', 'movies', 'in', 'English?']\n",
      "Decoded sentence:  What is the best romantic movie you have ever\n",
      "-\n",
      "Input sentence: ['What', 'causes', 'a']\n",
      "Decoded sentence:  What is the meaning of word #\n",
      "-\n",
      "Input sentence: ['How', 'does', 'work?']\n",
      "Decoded sentence:  How works? #\n",
      "-\n",
      "Input sentence: ['Will', 'there', 'really', 'be', 'any', 'war', 'between', 'India', 'and', 'Pakistan', 'over', 'the', 'Uri', 'What', 'will', 'be', 'its']\n",
      "Decoded sentence:  What are the chances of a nuclear war between\n",
      "-\n",
      "Input sentence: ['Can', 'I', 'recover', 'my', 'email', 'if', 'I', 'forgot', 'the', 'password?']\n",
      "Decoded sentence:  What should I do if I forgot my email password?\n",
      "-\n",
      "Input sentence: [\"What's\", 'the', 'difference', 'between', 'love', 'and']\n",
      "Decoded sentence:  What is the difference between and #\n",
      "-\n",
      "Input sentence: ['What', 'do', 'you', 'think', 'China', 'food?']\n",
      "Decoded sentence:  How do you think of Chinese food? #\n",
      "-\n",
      "Input sentence: ['Why', 'my', 'question', 'was', 'marked', 'as', 'needing']\n",
      "Decoded sentence:  How can I ask a question without getting marked\n",
      "-\n",
      "Input sentence: ['What', 'the', 'highest', 'electrical']\n",
      "Decoded sentence:  What can the greatest electrical #\n",
      "-\n",
      "Input sentence: ['Why', 'does', 'China', 'block', 'at', 'the', 'against', 'the']\n",
      "Decoded sentence:  Why does China support #\n",
      "-\n",
      "Input sentence: ['Can', 'of', 'C', 'cause', 'me', 'to', 'have', 'a']\n",
      "Decoded sentence:  How can C cause a #\n",
      "-\n",
      "Input sentence: ['Who', 'are', 'the']\n",
      "Decoded sentence:  Who are #\n",
      "-\n",
      "Input sentence: ['Does', 'it', 'matter', 'whether', 'humans', 'are', 'or']\n",
      "Decoded sentence:  Does it matter whether is or not? #\n",
      "-\n",
      "Input sentence: ['Does', 'a', 'black', 'hole', 'have']\n",
      "Decoded sentence:  Does the black hole have #\n",
      "-\n",
      "Input sentence: ['Is', 'correct', 'when', 'he', 'says', 'the', 'only', 'way', 'to', 'stop', 'is', 'to', 'stop', 'talking', 'about', 'it?']\n",
      "Decoded sentence:  What are your views about this about #\n",
      "-\n",
      "Input sentence: ['Why', 'does', 'keep', 'the', 'in']\n",
      "Decoded sentence:  Why does it keep blowing on #\n",
      "-\n",
      "Input sentence: ['If', 'I', 'do', 'not', 'YouTube', 'videos', '&', 'upload', 'then', 'are', 'there', 'chances', 'that', 'Google', 'may', 'block', 'my', 'account?']\n",
      "Decoded sentence:  How do you upload movies on YouTube and them?\n",
      "-\n",
      "Input sentence: ['What', 'does', 'the', 'Quora', 'website', 'look', 'like', 'to', 'of', 'Quora']\n",
      "Decoded sentence:  How do I write with a about #\n",
      "-\n",
      "Input sentence: ['Why', 'nobody', 'answer', 'my', 'questions', 'in', 'Quora?']\n",
      "Decoded sentence:  Why is no one answering my questions in Quora?\n",
      "-\n",
      "Input sentence: ['What', 'is', 'the', 'funniest', 'joke', 'you', 'know?']\n",
      "Decoded sentence:  What is the funniest joke of all time? #\n",
      "-\n",
      "Input sentence: ['How', 'do', 'I', 'use', 'as', 'a', 'business']\n",
      "Decoded sentence:  How can I use for business? #\n",
      "-\n",
      "Input sentence: ['Which', 'is', 'the', 'best', 'with', 'deep', 'under']\n",
      "Decoded sentence:  Which is the best under #\n",
      "-\n",
      "Input sentence: ['Which', 'are', 'the', 'best', 'engineering']\n",
      "Decoded sentence:  What is the best field of engineering? #\n",
      "-\n",
      "Input sentence: ['Does', 'anyone', 'see', 'the', 'between', 'and', 'Hindu']\n",
      "Decoded sentence:  What can we between of Hindu #\n",
      "-\n",
      "Input sentence: ['Which', 'is', 'the', 'best', 'income']\n",
      "Decoded sentence:  What is the best income #\n",
      "-\n",
      "Input sentence: ['What', 'is', 'it', 'like', 'to', 'live', 'in']\n",
      "Decoded sentence:  What is it like to live in #\n",
      "-\n",
      "Input sentence: ['Why', 'does', 'some', 'people', 'prefer', 'small']\n",
      "Decoded sentence:  Why do some people prefer to live with small #\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-\n",
      "Input sentence: ['Do', 'animals']\n",
      "Decoded sentence:  Do animals #\n",
      "-\n",
      "Input sentence: ['How', 'do', 'you', 'get', 'deleted', 'Instagram']\n",
      "Decoded sentence:  How can I recover a hacked Instagram? #\n",
      "-\n",
      "Input sentence: ['What', 'if', 'I', 'two', 'private', 'and', 'them', 'to', 'follow', 'each', 'other?']\n",
      "Decoded sentence:  Does eating help with lose weight? Is it safe\n",
      "-\n",
      "Input sentence: ['What', 'was', 'the', 'significance', 'of', 'the', 'battle', 'of', 'Somme,', 'and', 'how', 'did', 'this', 'battle', 'compare', 'and', 'contrast', 'to', 'the', 'Battle', 'of']\n",
      "Decoded sentence:  What was the significance of the battle of Somme,\n",
      "-\n",
      "Input sentence: ['Is', 'it', 'possible', 'to', 'pursue', 'many', 'different', 'things', 'in', 'life?']\n",
      "Decoded sentence:  How do I to chose between different things to\n",
      "-\n",
      "Input sentence: ['Did', 'Ben', 'more', 'than', 'Christian', 'as']\n",
      "Decoded sentence:  Why do you think you were for a racist? #\n",
      "-\n",
      "Input sentence: ['Which', 'business', 'is', 'good', 'start', 'up', 'in', 'Hyderabad?']\n",
      "Decoded sentence:  Which business is better to start in Hyderabad?\n",
      "-\n",
      "Input sentence: ['How', 'can', 'I', 'stop', 'being', 'so']\n",
      "Decoded sentence:  How do you stop #\n",
      "-\n",
      "Input sentence: ['What', 'is', 'the', 'best', 'of', 'courses', 'I', 'can', 'take', 'up', 'with', 'CA', 'to', 'enhance', 'my', 'career?']\n",
      "Decoded sentence:  What is the best way to reduce belly and fat?\n",
      "-\n",
      "Input sentence: ['How', 'do', 'you', 'take', 'a', 'on', 'a', 'Mac', 'laptop?']\n",
      "Decoded sentence:  How do I take a on my MacBook Pro? What are some\n",
      "-\n",
      "Input sentence: ['What', 'are', 'some', 'must', 'watch', 'TV', 'shows', 'before', 'you', 'die?']\n",
      "Decoded sentence:  Are there any must watch TV #\n",
      "-\n",
      "Input sentence: ['How', 'can', 'I', 'become', 'more', 'fluent', 'in']\n",
      "Decoded sentence:  How can I become fluent in #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'the', 'effects', 'of', 'of', '500', 'and', '1000', 'rupees', 'notes', 'on', 'real', 'estate']\n",
      "Decoded sentence:  What will be the impact of scrapping of ₹500 and\n",
      "-\n",
      "Input sentence: ['What', 'is', 'the', 'easiest', 'way', 'to', 'become', 'a']\n",
      "Decoded sentence:  How can I become a Top Writer on Quora? #\n",
      "-\n",
      "Input sentence: ['Why', 'do', 'people', 'hate', 'Hillary', 'Clinton?']\n",
      "Decoded sentence:  What do think about this people? #\n",
      "-\n",
      "Input sentence: ['Why', 'is', 'important?']\n",
      "Decoded sentence:  Why is #\n",
      "-\n",
      "Input sentence: ['If', 'Hillary', 'Clinton', 'could', 'not', 'continue', 'her', 'Presidential', 'how', 'would', 'the', 'Democratic', 'choose', 'a', 'new']\n",
      "Decoded sentence:  If Hillary Clinton can no as the how would her\n",
      "-\n",
      "Input sentence: ['Is', 'the', 'made', 'up', 'of?']\n",
      "Decoded sentence:  Is the made up of only #\n",
      "-\n",
      "Input sentence: ['I', 'got', 'a', 'Is', 'it', 'enough', 'to', 'get', 'into', 'top', 'universities', 'like']\n",
      "Decoded sentence:  Is a to get into a top school? #\n",
      "-\n",
      "Input sentence: ['How', 'can', 'I', 'learn', 'computer']\n",
      "Decoded sentence:  How can I learn faster? #\n",
      "-\n",
      "Input sentence: ['How', 'do', 'I', 'earn', 'from', 'Quora?']\n",
      "Decoded sentence:  How do I earn from Quora? #\n",
      "-\n",
      "Input sentence: ['What', 'is', 'my', 'code?']\n",
      "Decoded sentence:  What's the for #\n",
      "-\n",
      "Input sentence: ['Will', 'there', 'be', 'another', 'billion']\n",
      "Decoded sentence:  When will the the next billion #\n",
      "-\n",
      "Input sentence: ['How', 'do', 'I', 'create', 'a', 'new', 'in', 'a', 'new', 'using', 'C', 'programming']\n",
      "Decoded sentence:  How do I create a new and new in Linux using C\n",
      "-\n",
      "Input sentence: ['How', 'can', 'we', 'make', 'the', 'world', 'a', 'better', 'place', 'to', 'live', 'in', 'for', 'the', 'future']\n",
      "Decoded sentence:  How do I make my #\n",
      "-\n",
      "Input sentence: ['Why', 'are', 'we', 'about']\n",
      "Decoded sentence:  Why do we care for opinion and about what others\n",
      "-\n",
      "Input sentence: ['How', 'do', 'you', 'train', 'a', '4', 'months']\n",
      "Decoded sentence:  How do I train my #\n",
      "-\n",
      "Input sentence: ['Which', 'online', 'test', 'series', 'is', 'best', 'for', 'GATE', '2017', 'in']\n",
      "Decoded sentence:  Which test series is the best for GATE computer\n",
      "-\n",
      "Input sentence: ['In', 'the', 'play', 'in', 'the', 'why', 'do']\n",
      "Decoded sentence:  In what is the full form of #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'the', 'signs', 'of', 'an', 'person', 'playing']\n",
      "Decoded sentence:  What are signs of smart people their #\n",
      "-\n",
      "Input sentence: ['If', 'dark', 'energy', 'is', 'created', 'with', 'expansion', 'can', 'infinite', 'of', 'it', 'be', 'created?']\n",
      "Decoded sentence:  If energy is not in an expanding is potential\n",
      "-\n",
      "Input sentence: ['Who', 'is']\n",
      "Decoded sentence:  What is #\n",
      "-\n",
      "Input sentence: ['How', 'will', 'the', 'of', 'GST', 'bill', 'impact', 'the', 'of', 'common', 'people?']\n",
      "Decoded sentence:  What exactly is GST bill and how exactly will\n",
      "-\n",
      "Input sentence: ['How', 'can', 'you', 'recover', 'your', 'Gmail', 'password?']\n",
      "Decoded sentence:  How do I recover a Gmail password? #\n",
      "-\n",
      "Input sentence: ['How', 'can', 'a', 'mental', 'can', 'be']\n",
      "Decoded sentence:  How can I and my mental #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'the', 'qualities', 'of', 'a', 'good']\n",
      "Decoded sentence:  What makes a good #\n",
      "-\n",
      "Input sentence: ['Will', 'Modi', 'win', 'in']\n",
      "Decoded sentence:  Can Narendra Modi become Prime Minister of India\n",
      "-\n",
      "Input sentence: ['What', 'exactly', 'is', 'the', 'and', 'what', 'are', 'the', 'pros', 'and']\n",
      "Decoded sentence:  What is the cut possible of table #\n",
      "-\n",
      "Input sentence: ['How', 'do', 'I', 'choose', 'a', 'to', 'publish', 'my', 'paper?']\n",
      "Decoded sentence:  Where do I my #\n",
      "-\n",
      "Input sentence: ['What', 'are', 'your', 'New', \"Year's\", 'resolutions', 'for', '2017?']\n",
      "Decoded sentence:  What are your New Year's #\n",
      "-\n",
      "Input sentence: ['How', 'many', 'months', 'does', 'it', 'take', 'to', 'gain', 'knowledge', 'in', 'developing', 'Android', 'apps', 'from', 'scratch?']\n",
      "Decoded sentence:  How much time does it take to learn Android app\n"
     ]
    }
   ],
   "source": [
    "# Decode the first 100 questions and print each result next to its input tokens\n",
    "for idx in range(100):\n",
    "    input_seq = encoder_input[idx: idx+1]\n",
    "    decoded_sent = decode_sequence(input_seq)\n",
    "    print('-')\n",
    "    print('Input sentence:', q1[idx])\n",
    "    print('Decoded sentence:', decoded_sent)"
   ]
  }
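,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Quick sanity check on the decoded samples\n",
    "One rough way to eyeball the samples above is to measure token overlap between each input question and its decoded question: high overlap suggests the model is near-copying, low overlap suggests a freer paraphrase (or an off-topic decode). The helper below is an illustrative sketch added for analysis only; it is not part of the original training or decoding pipeline, and the example pair is taken from the printed samples above.\n",
    "\n",
    "```python\n",
    "# Illustrative helper (not part of the original pipeline): rough lexical\n",
    "# overlap between an input question and its decoded paraphrase.\n",
    "def jaccard_overlap(tokens_a, tokens_b):\n",
    "    # Lowercase and strip trailing punctuation so 'like?' matches 'like'\n",
    "    a = {t.lower().strip('?.,!') for t in tokens_a}\n",
    "    b = {t.lower().strip('?.,!') for t in tokens_b}\n",
    "    if not (a or b):\n",
    "        return 0.0\n",
    "    return len(a & b) / len(a | b)\n",
    "\n",
    "# Example pair taken from the decoded samples above\n",
    "inp = ['What', 'was', 'your', 'first', 'sexual', 'experience', 'like?']\n",
    "out = 'What was your first sexual experience? #'.split()\n",
    "print(jaccard_overlap(inp, out))  # 0.75\n",
    "```"
   ]
  }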
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
