{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "Attention in NMT.ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/zaidalyafeai/AttentioNN/blob/master/Attention_in_NMT.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0RyS3vmbjZbr",
        "colab_type": "text"
      },
      "source": [
        "Install some libraries to fix Arabic text not rendering correctly"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FBP8CIYuPHlW",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 224
        },
        "outputId": "62a87f83-2e6a-43bd-9fe4-b593595ef2d8"
      },
      "source": [
        "!pip install --upgrade arabic-reshaper\n",
        "!pip install python-bidi"
      ],
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Collecting arabic-reshaper\n",
            "  Downloading https://files.pythonhosted.org/packages/1c/b8/9f87dc2fc6c2e087e448db9e7f66ca4d68c22e9d49a95e5aad22d77c74f1/arabic_reshaper-2.0.15-py3-none-any.whl\n",
            "Requirement already satisfied, skipping upgrade: future in /usr/local/lib/python3.6/dist-packages (from arabic-reshaper) (0.16.0)\n",
            "Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from arabic-reshaper) (41.2.0)\n",
            "Installing collected packages: arabic-reshaper\n",
            "Successfully installed arabic-reshaper-2.0.15\n",
            "Collecting python-bidi\n",
            "  Downloading https://files.pythonhosted.org/packages/33/b0/f942d146a2f457233baaafd6bdf624eba8e0f665045b4abd69d1b62d097d/python_bidi-0.4.2-py2.py3-none-any.whl\n",
            "Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from python-bidi) (1.12.0)\n",
            "Installing collected packages: python-bidi\n",
            "Successfully installed python-bidi-0.4.2\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rLAEegPwkINw",
        "colab_type": "text"
      },
      "source": [
        "Get the dataset and unzip it "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9xJ2ync1Butl",
        "colab_type": "code",
        "outputId": "451e3a52-597b-4171-8c47-a01da9f15ab5",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 255
        }
      },
      "source": [
        "!wget http://www.manythings.org/anki/ara-eng.zip\n",
        "!unzip ara-eng.zip"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "--2019-09-23 14:44:44--  http://www.manythings.org/anki/ara-eng.zip\n",
            "Resolving www.manythings.org (www.manythings.org)... 104.24.108.196, 104.24.109.196, 2606:4700:30::6818:6dc4, ...\n",
            "Connecting to www.manythings.org (www.manythings.org)|104.24.108.196|:80... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 270779 (264K) [application/zip]\n",
            "Saving to: ‘ara-eng.zip’\n",
            "\n",
            "ara-eng.zip         100%[===================>] 264.43K   424KB/s    in 0.6s    \n",
            "\n",
            "2019-09-23 14:44:46 (424 KB/s) - ‘ara-eng.zip’ saved [270779/270779]\n",
            "\n",
            "Archive:  ara-eng.zip\n",
            "  inflating: _about.txt              \n",
            "  inflating: ara.txt                 \n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fO_IJ2_Wr4NG",
        "colab_type": "text"
      },
      "source": [
        "## Introduction\n",
        "Traditional NMT models that use [seq2seq](https://arxiv.org/abs/1409.3215) architectures suffer from a bottleneck problem: only the final hidden state of the encoder is passed to the decoder to produce the translation. This causes two problems. First, the decoder doesn't utilize the intermediate hidden states created by the individual inputs to the encoder. Second, the decoder cannot decide which parts of the source sentence to focus on at each step of the translation. This is especially harmful when translating long sequences, where many words are compressed into a single hidden state. Attention-based NMT resolves both problems: it passes all of the encoder hidden states to the decoder, and it forces the decoder to focus on certain parts of the encoder output by applying a weight to each input hidden state. \n",
        "In this notebook, I explain attention-based seq2seq using an Arabic-to-English parallel dataset."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "y6-QTu27lQbG",
        "colab_type": "text"
      },
      "source": [
        "## Imports"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "G4OCoreFk80h",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import tensorflow as tf\n",
        "import numpy as np\n",
        "from sklearn.model_selection import train_test_split\n",
        "import matplotlib.pyplot as plt\n",
        "import pandas as pd\n",
        "import seaborn as sns\n",
        "import re\n",
        "import arabic_reshaper\n",
        "from bidi.algorithm import get_display\n",
        "tf.enable_eager_execution()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e0GpDe37kO8D",
        "colab_type": "text"
      },
      "source": [
        "## Preprocessing\n",
        "\n",
        "In this section we do some preprocessing. Our main task is to map each word to a unique index. "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BfXMenbqkLSi",
        "colab_type": "text"
      },
      "source": [
        "Show the first few parallel translations"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CNFPC8J8iaXd",
        "colab_type": "code",
        "outputId": "3baf6e27-8daf-44fe-f08e-1a88196d6c74",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 102
        }
      },
      "source": [
        "!cat ara.txt | head -5"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Hi.\tمرحبًا.\n",
            "Run!\tاركض!\n",
            "Help!\tالنجدة!\n",
            "Jump!\tاقفز!\n",
            "Stop!\tقف!\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a3ym9l1vkiNB",
        "colab_type": "text"
      },
      "source": [
        "Read the dataset from the file"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "R7GMWGi8iek6",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "with open('ara.txt', 'r') as f:\n",
        "  en2ar = f.readlines()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Zw8C5iwjkpSN",
        "colab_type": "text"
      },
      "source": [
        "Clean the dataset of special characters"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "teE5EQ4uiyJ6",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def preprocess_stmt(stmt):\n",
        "  #remove new line character\n",
        "  stmt = stmt.replace(\"\\n\", \"\")\n",
        "  \n",
        "  #only keep alphanumerics\n",
        "  stmt = re.sub(r'([^\\s\\w]|_)+', \"\", stmt.lower().strip())\n",
        "  \n",
        "  #here we map all aleph character to one character\n",
        "  stmt = re.sub(r'[آأإا]','ا', stmt)\n",
        "  \n",
        "  #attach start, end special symbols \n",
        "  stmt = '<s> '+stmt+' <e>'\n",
        "  \n",
        "  return stmt\n",
        "\n",
        "en = [preprocess_stmt(stmt.split('\\t')[0]) for stmt in en2ar]\n",
        "ar = [preprocess_stmt(stmt.split('\\t')[1]) for stmt in en2ar]"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nrmSmSr3dUEy",
        "colab_type": "code",
        "outputId": "1f862d53-4d29-4ee6-a9cb-9632f9a87cfe",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "print(ar[0], '==>', en[0])"
      ],
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "<s> مرحبا <e> ==> <s> hi <e>\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "r-HM2shaloAC",
        "colab_type": "text"
      },
      "source": [
        "Map each word to an integer using `tf.keras.preprocessing`"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kHVexp-pkTcv",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "#helper function to find the length of the longest statement in a corpus\n",
        "def get_max_stmt(stmts):\n",
        "  return max([len(stmt) for stmt in stmts])\n",
        "\n",
        "def get_tensors_dicts(stmts):\n",
        "\n",
        "  #tokenize on spaces and convert words to integers \n",
        "  tokenz = tf.keras.preprocessing.text.Tokenizer(split = ' ', filters = \"\")\n",
        "  tokenz.fit_on_texts(stmts)\n",
        "  sequences = tokenz.texts_to_sequences(stmts)\n",
        "\n",
        "  #pad the sequences to have the same length \n",
        "  max_stmt = get_max_stmt(sequences)\n",
        "  output = tf.keras.preprocessing.sequence.pad_sequences(sequences, maxlen = max_stmt, padding = \"post\")\n",
        "  \n",
        "  #create the dictionaries that map word to index and index to word \n",
        "  word2index = tokenz.word_index\n",
        "  word2index['<p>'] = 0\n",
        "  index2word = {word2index[k]:k for k in word2index.keys()}\n",
        "  \n",
        "  \n",
        "  return output, word2index, index2word"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "L08abKY_mI5L",
        "colab_type": "text"
      },
      "source": [
        "Get the input and target tensors"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0aqjlncsn2c3",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "input_tensors, input_word2index, input_index2word = get_tensors_dicts(ar)\n",
        "trget_tensors, trget_word2index, trget_index2word = get_tensors_dicts(en)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ctm4_VlTmOfP",
        "colab_type": "text"
      },
      "source": [
        "Create the dataset"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kpVlpeOhpFjo",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "BATCH_SIZE = 128\n",
        "#random split\n",
        "input_tensor_train, input_tensor_valid, trget_tensor_train, trget_tensor_valid = train_test_split(input_tensors, trget_tensors, test_size=0.1)\n",
        "\n",
        "#training dataset\n",
        "train_dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, trget_tensor_train)).shuffle(len(input_tensor_train))\n",
        "train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=True)\n",
        "\n",
        "#validation dataset\n",
        "valid_dataset = tf.data.Dataset.from_tensor_slices((input_tensor_valid, trget_tensor_valid)).shuffle(len(input_tensor_valid))\n",
        "valid_dataset = valid_dataset.batch(BATCH_SIZE, drop_remainder=True)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "180JcjjMmpQz",
        "colab_type": "text"
      },
      "source": [
        "## Create Models\n",
        "Instantiate some variables"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2eiTtl-imwON",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "units = 1024\n",
        "embedding_dim = 256\n",
        "\n",
        "input_vocab_size = len(input_index2word)\n",
        "trget_vocab_size = len(trget_index2word)\n",
        "\n",
        "input_max_length = input_tensor_train.shape[1]\n",
        "trget_max_length = trget_tensor_train.shape[1]"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dcADa-3dm6Lv",
        "colab_type": "text"
      },
      "source": [
        "## Attention Mechanism\n",
        "\n",
        "As we see from the figure below, the encoder hidden states $H = [h_1, h_2, \\cdots , h_n]$ and the final hidden state $h_n$ are processed by a special network called the attention network. This produces the attention weights, values between 0 and 1 that tell us which hidden states are most important at each step of the decoder. In this notebook we use the following network \n",
        "\n",
        "$$\\text{Attention Network} = \\text{softmax}(V(\\tanh(W_1(H)+ W_2(h))))$$\n",
        "\n",
        "Where $W_1, W_2$ and $V$ are dense layers with $units$, $units$ and $1$ neurons respectively. This results in an output tensor of shape $[\\text{batch_sz}, n, 1]$ called the attention weights. The attention weights are then multiplied element-wise by $H$ to generate the context vector\n",
        "\n",
        "$$\\text{Context Vector} = \\text{attention_weights} \\odot H$$\n",
        "Finally, the context vector is concatenated with the embedded input vector and fed to the decoder."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Lu0e9nuLj8kL",
        "colab_type": "text"
      },
      "source": [
        "![alt text](https://raw.githubusercontent.com/zaidalyafeai/AttentioNN/master/images/attention.png)"
      ]
    },
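    {
      "cell_type": "markdown",
      "metadata": {
        "id": "attn_demo_md",
        "colab_type": "text"
      },
      "source": [
        "As a quick sanity check of the formulas above, we can run the attention network on random tensors (a sketch with illustrative shapes, not the notebook's actual variables):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "attn_demo_code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "#illustrative shapes: batch of 2, n = 5 encoder steps, 8 hidden units\n",
        "H = tf.random.uniform((2, 5, 8)) #encoder hidden states\n",
        "h = tf.random.uniform((2, 8))    #final hidden state\n",
        "\n",
        "W1 = tf.keras.layers.Dense(8)\n",
        "W2 = tf.keras.layers.Dense(8)\n",
        "V = tf.keras.layers.Dense(1)\n",
        "\n",
        "score = V(tf.nn.tanh(W1(H) + W2(tf.expand_dims(h, axis = 1))))\n",
        "weights = tf.nn.softmax(score, axis = 1)\n",
        "\n",
        "#the weights have shape [batch_sz, n, 1] and sum to 1 along the n axis\n",
        "print(weights.shape, tf.reduce_sum(weights, axis = 1).numpy())"
      ],
      "execution_count": 0,
      "outputs": []
    },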
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hBKyplPfnBrH",
        "colab_type": "text"
      },
      "source": [
        "### Encoder"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zsUr6BWIqS_Y",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def gru(units):\n",
        "  return tf.keras.layers.CuDNNGRU(units, \n",
        "                             return_sequences=True, \n",
        "                             return_state=True, \n",
        "                             recurrent_initializer='glorot_uniform')\n",
        "\n",
        "def get_encoder(vocab_size, embedding_dim, enc_units, batch_sz):\n",
        "  \n",
        "    input = tf.keras.layers.Input((input_max_length,))\n",
        "    \n",
        "    # apply embedding output [batch_sz, input_max_length, embedding_dim]\n",
        "    x = tf.keras.layers.Embedding(vocab_size, embedding_dim)(input)\n",
        "    \n",
        "    # apply gru output x:[batch_sz, input_max_length, units] h:[batch_sz, units]\n",
        "    x, h = gru(units)(x)\n",
        "    \n",
        "    return tf.keras.models.Model(inputs = input, outputs = [x, h])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0lebNKA7nFg4",
        "colab_type": "text"
      },
      "source": [
        "### Decoder"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kOBqbMniraMF",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def get_decoder(vocab_size, embedding_dim, units, batch_sz):\n",
        "  \n",
        "  enc_output = tf.keras.layers.Input((input_max_length, units))\n",
        "  enc_hidden = tf.keras.layers.Input((units,))\n",
        "  dec_input = tf.keras.layers.Input((1,))\n",
        "\n",
        "  W1 = tf.keras.layers.Dense(units)\n",
        "  W2 = tf.keras.layers.Dense(units)\n",
        "  V = tf.keras.layers.Dense(1)\n",
        "      \n",
        "  x = tf.keras.layers.Embedding(vocab_size, embedding_dim)(dec_input)\n",
        "  \n",
        "  #1. attention network output [batch_sz, input_max_length, 1]\n",
        "  score = V(tf.nn.tanh(W1(enc_output) + W2(tf.expand_dims(enc_hidden, axis = 1))))\n",
        "\n",
        "  #2. attention weights output [batch_sz, input_max_length , 1]\n",
        "  attention_weights = tf.nn.softmax(score, axis = 1)\n",
        "\n",
        "  #3. context_vector output [batch_sz, 1, units]\n",
        "  context_vector = attention_weights * enc_output\n",
        "  context_vector = tf.reduce_sum(context_vector, axis=1, keepdims = True)\n",
        "  \n",
        "  #4. concatenate with the embedded input, output [batch_sz, 1, units + embedding_dim]\n",
        "  x = tf.concat([x, context_vector], axis = -1)\n",
        "  \n",
        "  #5. apply GRU output x:[batch_sz, 1, units] h:[batch_sz, units]\n",
        "  x, h = gru(units)(x)\n",
        "  \n",
        "  #6. squeeze the time axis and apply a dense layer, output [batch_sz, vocab_size]\n",
        "  x = tf.reduce_sum(x, axis = 1)\n",
        "  output = tf.keras.layers.Dense(vocab_size)(x)\n",
        " \n",
        "  return tf.keras.models.Model(inputs = [dec_input, enc_hidden, enc_output], outputs = [output, h, attention_weights])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rBRZUsiW2Nhq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "encoder = get_encoder(input_vocab_size, embedding_dim, units, BATCH_SIZE)\n",
        "decoder = get_decoder(trget_vocab_size, embedding_dim, units, BATCH_SIZE)"
      ],
      "execution_count": 0,
      "outputs": []
    },
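    {
      "cell_type": "markdown",
      "metadata": {
        "id": "enc_shape_md",
        "colab_type": "text"
      },
      "source": [
        "As a sanity check, feeding a dummy batch through the encoder should reproduce the shapes annotated in the comments above (a sketch; assumes the previous cells have been run):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "enc_shape_code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "dummy = tf.zeros((BATCH_SIZE, input_max_length))\n",
        "enc_out, enc_hidden = encoder(dummy)\n",
        "\n",
        "#expect enc_out: [BATCH_SIZE, input_max_length, units] and enc_hidden: [BATCH_SIZE, units]\n",
        "print(enc_out.shape, enc_hidden.shape)"
      ],
      "execution_count": 0,
      "outputs": []
    },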
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZKy5GkHknL_p",
        "colab_type": "text"
      },
      "source": [
        "## Loss function"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Xl4X9GGr3KWW",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "optimizer = tf.train.AdamOptimizer()\n",
        "\n",
        "def loss_function(real, pred):\n",
        "  #mask out the padding index 0 so it doesn't contribute to the loss\n",
        "  mask = 1 - np.equal(real, 0)\n",
        "  loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask\n",
        "  return tf.reduce_mean(loss_)"
      ],
      "execution_count": 0,
      "outputs": []
    },
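    {
      "cell_type": "markdown",
      "metadata": {
        "id": "loss_mask_md",
        "colab_type": "text"
      },
      "source": [
        "A tiny illustration of the masking above: positions where the target is 0 (padding) contribute nothing to the loss (a sketch with made-up logits):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "loss_mask_code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "real = tf.constant([3, 0])       #second position is padding\n",
        "pred = tf.random.uniform((2, 5)) #fake logits over a 5-word vocabulary\n",
        "\n",
        "mask = 1 - np.equal(real, 0)     #[1, 0]\n",
        "loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask\n",
        "\n",
        "#the masked (second) entry is exactly 0\n",
        "print(loss_.numpy())"
      ],
      "execution_count": 0,
      "outputs": []
    },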
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pRsAuHdyoTmT",
        "colab_type": "text"
      },
      "source": [
        "## Training"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "3AF1lJol3VAO",
        "colab_type": "code",
        "outputId": "26a15c0d-c3f2-4077-e1d7-c8976491eaf3",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 867
        }
      },
      "source": [
        "import time\n",
        "\n",
        "EPOCHS = 10\n",
        "\n",
        "for epoch in range(EPOCHS):\n",
        "    start = time.time()\n",
        "    \n",
        "    total_loss = 0\n",
        "    \n",
        "    #loop over the training tensors \n",
        "    for (batch, (inp, targ)) in enumerate(train_dataset):\n",
        "      \n",
        "        loss = 0\n",
        "        with tf.GradientTape() as tape:\n",
        "          \n",
        "            #feed the encoder \n",
        "            enc_output, enc_hidden = encoder(inp)\n",
        "            \n",
        "            #create the initial input to the decoder \n",
        "            dec_input = tf.expand_dims([trget_word2index['<s>']] * BATCH_SIZE, 1)      \n",
        "            \n",
        "            # Teacher forcing - feeding the target as the next input\n",
        "            for t in range(1, targ.shape[1]):\n",
        "              \n",
        "                # passing enc_output to the decoder\n",
        "                predictions, enc_hidden, _ = decoder([dec_input, enc_hidden, enc_output])\n",
        "                \n",
        "                # evaluate the loss\n",
        "                loss += loss_function(targ[:, t], predictions)                \n",
        "                \n",
        "                # evaluate the next input \n",
        "                dec_input = tf.expand_dims(targ[:, t], 1)\n",
        "        \n",
        "        #average the loss over the target time steps \n",
        "        batch_loss = (loss / int(targ.shape[1]))\n",
        "        total_loss += batch_loss\n",
        "        \n",
        "        # backprop\n",
        "        variables = encoder.variables + decoder.variables\n",
        "        gradients = tape.gradient(loss, variables)\n",
        "        optimizer.apply_gradients(zip(gradients, variables))\n",
        "        \n",
        "        # log every 100 batches \n",
        "        if batch % 100 == 0:\n",
        "            print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,\n",
        "                                                         batch,\n",
        "                                                         batch_loss.numpy()))\n",
        "        N_BATCH = batch\n",
        "    \n",
        "    #show the average training loss \n",
        "    print('Epoch {} Train Loss {:.4f}'.format(epoch + 1,\n",
        "                                        total_loss / N_BATCH))\n",
        "    \n",
        "    # enumerate the validation data\n",
        "    total_loss = 0\n",
        "    for (batch, (inp, targ)) in enumerate(valid_dataset):\n",
        "      \n",
        "        loss = 0\n",
        "        \n",
        "        # feed encoder \n",
        "        enc_output, enc_hidden = encoder(inp)\n",
        "\n",
        "        # initial input to the decoder \n",
        "        dec_input = tf.expand_dims([trget_word2index['<s>']] * BATCH_SIZE, 1)      \n",
        "\n",
        "        # Teacher forcing - feeding the target as the next input\n",
        "        for t in range(1, targ.shape[1]):\n",
        "\n",
        "            # passing enc_output to the decoder\n",
        "            predictions, enc_hidden, _ = decoder([dec_input, enc_hidden, enc_output])\n",
        "            loss += loss_function(targ[:, t], predictions)                \n",
        "\n",
        "            # using teacher forcing\n",
        "            dec_input = tf.expand_dims(targ[:, t], 1)\n",
        "        \n",
        "        # evaluate the loss \n",
        "        batch_loss = (loss / int(targ.shape[1]))\n",
        "        total_loss += batch_loss\n",
        "        \n",
        "        N_BATCH = batch\n",
        "    \n",
        "    #show the average validation loss \n",
        "    print('Epoch {} Valid Loss {:.4f}'.format(epoch + 1,\n",
        "                                        total_loss / N_BATCH))\n",
        "    \n",
        "    print('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))\n",
        "    "
      ],
      "execution_count": 40,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch 1 Batch 0 Loss 0.0911\n",
            "Epoch 1 Train Loss 0.0865\n",
            "Epoch 1 Valid Loss 0.8554\n",
            "Time taken for 1 epoch 126.60913157463074 sec\n",
            "\n",
            "Epoch 2 Batch 0 Loss 0.0695\n",
            "Epoch 2 Train Loss 0.0706\n",
            "Epoch 2 Valid Loss 0.8668\n",
            "Time taken for 1 epoch 126.70999240875244 sec\n",
            "\n",
            "Epoch 3 Batch 0 Loss 0.0566\n",
            "Epoch 3 Train Loss 0.0552\n",
            "Epoch 3 Valid Loss 0.8916\n",
            "Time taken for 1 epoch 125.95465111732483 sec\n",
            "\n",
            "Epoch 4 Batch 0 Loss 0.0507\n",
            "Epoch 4 Train Loss 0.0432\n",
            "Epoch 4 Valid Loss 0.8969\n",
            "Time taken for 1 epoch 126.47470831871033 sec\n",
            "\n",
            "Epoch 5 Batch 0 Loss 0.0361\n",
            "Epoch 5 Train Loss 0.0336\n",
            "Epoch 5 Valid Loss 0.8937\n",
            "Time taken for 1 epoch 126.90233731269836 sec\n",
            "\n",
            "Epoch 6 Batch 0 Loss 0.0336\n",
            "Epoch 6 Train Loss 0.0257\n",
            "Epoch 6 Valid Loss 0.9044\n",
            "Time taken for 1 epoch 126.45854306221008 sec\n",
            "\n",
            "Epoch 7 Batch 0 Loss 0.0206\n",
            "Epoch 7 Train Loss 0.0207\n",
            "Epoch 7 Valid Loss 0.9150\n",
            "Time taken for 1 epoch 127.0796217918396 sec\n",
            "\n",
            "Epoch 8 Batch 0 Loss 0.0161\n",
            "Epoch 8 Train Loss 0.0170\n",
            "Epoch 8 Valid Loss 0.9272\n",
            "Time taken for 1 epoch 126.14989590644836 sec\n",
            "\n",
            "Epoch 9 Batch 0 Loss 0.0155\n",
            "Epoch 9 Train Loss 0.0144\n",
            "Epoch 9 Valid Loss 0.9384\n",
            "Time taken for 1 epoch 126.11995792388916 sec\n",
            "\n",
            "Epoch 10 Batch 0 Loss 0.0119\n",
            "Epoch 10 Train Loss 0.0126\n",
            "Epoch 10 Valid Loss 0.9333\n",
            "Time taken for 1 epoch 126.57300758361816 sec\n",
            "\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6YYvP2cEoXNi",
        "colab_type": "text"
      },
      "source": [
        "## Test"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "tm_qC-bI7q4Q",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def translate(sentence):\n",
        "  \n",
        "    # dictionary mapping each predicted word to its attention weights \n",
        "    attention_dict = {}\n",
        "    \n",
        "    # preprocess a sentence \n",
        "    sentence = '<s> '+sentence+' <e>'\n",
        "    sentence = sentence.split(' ')\n",
        "    sentence_length = len(sentence)\n",
        "    inputs = [input_word2index[i] for i in sentence]\n",
        "    inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen = input_max_length, padding='post')\n",
        "    inputs = tf.convert_to_tensor(inputs)\n",
        "    \n",
        "    result = ''\n",
        "\n",
        "    #feed encoder \n",
        "    enc_out, enc_hidden = encoder(inputs)\n",
        "\n",
        "    # prepare first input to the decoder \n",
        "    dec_input = tf.expand_dims([trget_word2index['<s>']], 0)\n",
        "    attention_matrix = np.empty(shape = (1, input_max_length))\n",
        "    \n",
        "    for t in range(trget_max_length):\n",
        "        \n",
        "        # feed decoder \n",
        "        predictions, enc_hidden, attention_weights = decoder([dec_input, enc_hidden, enc_out])\n",
        "\n",
        "        # predict next word \n",
        "        predicted_id = tf.argmax(predictions[0]).numpy()\n",
        "\n",
        "        result += trget_index2word[predicted_id] + ' '\n",
        "        \n",
        "        # save the weights \n",
        "        attention_weights = attention_weights.numpy().reshape((input_max_length,))[:sentence_length]\n",
        "        attention_dict[trget_index2word[predicted_id]] = attention_weights\n",
        "        \n",
        "        # exit on end token \n",
        "        if trget_index2word[predicted_id] == '<e>':\n",
        "            sentence = [get_display(arabic_reshaper.reshape(word)) for word in sentence]\n",
        "            df = pd.DataFrame(attention_dict, index = sentence)\n",
        "            return result, df\n",
        "        \n",
        "        # the predicted ID is fed back into the model\n",
        "        dec_input = tf.expand_dims([predicted_id], 0)\n",
        "        \n",
        "    df = pd.DataFrame(attention_dict, index = sentence)\n",
        "    return result, df"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ciC-vtiz-x4m",
        "colab_type": "code",
        "outputId": "1960096c-2d45-43af-d643-80316cdfdbb6",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 34
        }
      },
      "source": [
        "stmt = \"انا اعلم اين تسكن\"\n",
        "result, df = translate(stmt)\n",
        "print(result)"
      ],
      "execution_count": 52,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "i know where you live <e> \n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qh4nCcDvSEmX",
        "colab_type": "code",
        "outputId": "16d1d3ff-07ae-419e-ff95-114f12f9ea1e",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 612
        }
      },
      "source": [
        "plt.figure(figsize=(20, 10))\n",
        "sns.heatmap(df, square = True)"
      ],
      "execution_count": 53,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<matplotlib.axes._subplots.AxesSubplot at 0x7eff4a982b00>"
            ]
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 53
        },
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAq8AAAJCCAYAAADuuy7BAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAIABJREFUeJzt3X3UpXdZH/rvNZMEbMRATURIQgk1\nKEGEaqAqhqoVTFtqaEUJVhFER7tAtBUsao+t4dAleg7SnoZlR15KWdKYI2+zlDZiFUjlKBmDEJKS\nEgKapGm1gLDkJcnkuc4fe4/sDM/7PPvlN8/nk7XX7H3ve+/7eu558jy/+e7r/v2quwMAACM4sOwC\nAABguwxeAQAYhsErAADDMHgFAGAYBq8AAAzD4BUAgGEYvAIAMAyDVwAAhmHwCgDAME6b+wHOONcS\nXgtwoGrZJcCeswLgYpxx2unLLmHfuOvYPcsuYd84dvcdp/Qvxnv+960L+wF5+tmPWKlzKXkFAGAY\nBq8AAAxj7m0DAADssbV7l13B0kheAQAYhuQVAGA0vbbsCpZG8goAwDAkrwAAo1mTvAIAwMqTvAIA\nDKb1vAIAwOqTvAIAjEbPKwAArD7JKwDAaPS8AgDA6jN4BQBgGNoGAABGs3bvsitYGskrAAAnpaou\nraqbq+qWqnrxBvt8d1XdVFU3VtUbZrZ/f1V9aHr7/q2OJXkFABjNCl2wVVUHk1yZ5MlJbk9yXVUd\n6e6bZva5MMlPJXlid3+iqr5suv2vJvkXSS5O0kn+cPraT2x0PMkrAAAn4wlJbunuW7v77iRXJbns\nhH1+KMmVxwel3f2n0+3fnuTt3f3x6XNvT3LpZgeTvAIAjGaBixRU1aEkh2Y2He7uwzOPz01y28zj\n25P8zRPe5pHT9/q9JAeT/Mvu/s8bvPbczeoxeAUAYEPTgerhLXfc3GlJLkzyzUnOS/KuqnrMbt8I\nAICB9Ar1vCa5I8n5M4/Pm26bdXuSP+jue5J8pKr+eyaD2TsyGdDOvvYdmx1MzysAACfjuiQXVtUF\nVXVGksuTHDlhn7dkOkitqrMzaSO4Nck1SZ5SVQ+qqgclecp024YkrwAAo1lgz+tWuvtYVT0/k0Hn\nwSSv6e4bq+qKJEe7+0g+P0i9Kcm9SV7U3R9Lkqp6SSYD4CS5ors/vtnxqrvn9bUkSU4749z5HoAk\nyYGqZZcAe27eP5+YOOO005ddwr5x17F7ll3CvnHs7jtO6V+Md33o3Qv7AXm/C79xpc6l5BUAYDSr\n1fO6UHpeAQAYhuQVAGA0a/cuu4KlkbwCADAMySsAwGj0vAIAwOozeAUAYBjaBgAARrNCixQsmuQV\nAIBhSF4BAEbjgi0AAFh9klcAgNHoeQUAgNUneQUAGEy35WEBAGDlSV4BAEZjtgEAAFh9klcAgNGY\nbQAAAFbftgevNfGWqnrUNvY9VFVHq+ro2tqnT65CAADuq9cWd1sxO0len5Lk8Ul+cKsdu/twd1/c\n3RcfOHDmrosDAIBZOxm8PjeTgevfryq9sgAAy7J27+JuK2Zbg9eqOjvJo7v7PyX57SRPm2tVAACw\nju0mr9+X5D9O778222gdAACAvbbdj/9/IMmlSdLd11XVQ6rq/O6+bX6lAQCwrhW8kGpRtkxeq+qB\nSf5td98xs/mFSc6eW1UAALCOLZPX7v7zJP/uhG1vn1tFAABsziIFm6uqX6iqL6mq06vqv1TVn1XV\n9867OAAAmLXdC7ae0t2fSvLUJB9N8hVJXjSvogAA2IRFCrZ0vL3g7yX5f7v7k3OqBwAANrTd2QZ+\no6o+mOSzSf5xVZ2T5HPzKwsAgA3ped1cd784yTcmubi770nymSSXzbMwAAA40ZbJa1X9lSQXdvf7\nZjZ/aZLVWy8MAGA/kLxu6p4k
b6qqM2e2vSrJQ+ZTEgAArG8787zeU1VvTvLdSV5bVQ9Lck53H517\ndQAAfIHu/fsB+HZnG3hVkudM7z8ryWvnUw4AAGxsW7MNdPcHa+KRSS5Pcsl8ywIAYEN6Xrfl1Zkk\nsDd09yfmVA8AAGxoJ4PXq5M8NpNBLAAAy7KPV9ja7iIF6e7PJDlrjrUAAMCmdpK8AgDAUm07eQUA\nYEW4YAsAAFaf5BUAYDQreCHVokheAQAYhuQVAGA0el4BAGD1SV4BAEaj5xUAAFaf5BUAYDR6XgEA\nYPVJXgEARiN5BQCA1Sd5BQAYjdkGAABg9UleAQBGo+cVAABWn8ErAADD0DYAADAaF2wBAMDqk7wC\nAIzGBVsAALD6JK8AAKPR8woAAKtP8nqK+KLT77fsEvaNT9/9uWWXAHvq7mP3LLsEYKf0vAIAwOqT\nvAIAjEbyCgAAq0/yCgAwmu5lV7A0klcAAIYheQUAGI2eVwAAWH2SVwCA0UheAQBg9UleAQBG05JX\nAABYeQavAAAMQ9sAAMBoXLAFAACrT/IKADAay8MCAMDqk7wCAIxGzysAAKw+ySsAwGgkrwAAsPok\nrwAAo7E8LAAArD7JKwDAYHrNPK8AALDyJK8AAKMx2wAAAKw+ySsAwGjMNgAAAKvP4BUAgGFoGwAA\nGI2psgAAYPVJXgEARmOqLAAAWH2SVwCA0UheAQBg9UleAQBG02YbAACAlSd5BQAYjZ5XAABYfZJX\nAIDRWGELAABWn+QVAGA0recVAABWnuQVAGA0+7jnddPBa1Vdm2S9s1NJurufNJeqAABgHVslr9+7\nmzetqkNJDiVJHTwrBw6cuZu3AQCA+9h08Nrdf3zitqqq7s3XJOvuw0kOJ8lpZ5y7f3NtAIA5aIsU\nbK2q/sP07ruq6vaq+vk51QQAAOva9gVb3f2s6Z+XVNX9k9yR5MXzKgwAgA3s4wu2djtV1oGsfyEX\nAADMzVazDdyWLxyk1vT2inkVBQDAJvbxIgVbXbB1/qIKAQCArVikAABgNHpeAQBg9UleAQBGY55X\nAABYfZJXAIDR6HkFAIDdqapLq+rmqrqlqjZcxKqqvrOquqounj5+eFV9tqr+aHr75a2OJXkFABjN\nCs3zWlUHk1yZ5MlJbk9yXVUd6e6bTtjvAUl+LMkfnPAWH+7ux233eJJXAABOxhOS3NLdt3b33Umu\nSnLZOvu9JMnLknzuZA5m8AoAMJq1Xtitqg5V1dGZ26ETqjk3yW0zj2+fbvtLVfW1Sc7v7t9c56u5\noKreW1XvrKpLtvrStQ0AALCh7j6c5PBuX19VB5K8PMmz13n6ziQP6+6PVdXXJXlLVT26uz+10ftJ\nXgEAOBl3JDl/5vF5023HPSDJVyd5R1V9NMnXJzlSVRd3913d/bEk6e4/TPLhJI/c7GCSVwCAwfRq\nLVJwXZILq+qCTAatlyf5nuNPdvcnk5x9/HFVvSPJC7v7aFWdk+Tj3X1vVT0iyYVJbt3sYAavAADs\nWncfq6rnJ7kmycEkr+nuG6vqiiRHu/vIJi9/UpIrquqeJGtJfqS7P77Z8QxeAQBGs2KLFHT325K8\n7YRtP7vBvt88c/+NSd64k2PpeQUAYBiSVwCA0axY8rpIklcAAIYheQUAGM0KLQ+7aJJXAACGIXkF\nABiNnlcAAFh9klcAgMG05BUAAFaf5BUAYDSSVwAAWH2SVwCA0ayZ5xUAAFaewSsAAMPQNgAAMBoX\nbAEAwOqTvAIAjEbyCgAAq0/yCgAwmG7JKwAArDzJKwDAaPS8AgDA6pO8AgCMRvIKAACrT/J6irjz\n5d+x7BL2jS/78bcsu4R9465jdy+7hH1h/+Y3MK6WvAIAwOqTvAIAjEbyCgAAq0/yCgAwmrVlF7A8\nklcAAIZh8AoAwDC0DQAADMZUWQAAMADJKwDAaCSvAACw+iSvAACjMVUWAACsPskrAMBgzDYAAA
AD\nkLwCAIxGzysAAKw+ySsAwGD0vAIAwAAkrwAAo9HzCgAAq0/yCgAwmJa8AgDA6jN4BQBgGNoGAABG\no20AAABWn+QVAGAwLtgCAIABSF4BAEYjeQUAgNUneQUAGIyeVwAAGIDkFQBgMJJXAAAYgOQVAGAw\nklcAABiA5BUAYDRdy65gaSSvAAAMQ/IKADAYPa8AADAAg1cAAIahbQAAYDC95oItAABYeZJXAIDB\nuGALAAAGsGnyWlW3Jen1nkrS3f2wuVQFAMCGeh8vUrDp4LW7z9/Nm1bVoSSHkqQOnpUDB87czdsA\nAMB9zKXntbsPJzmcJKedce56yS0AALuk53UHqurJVfXz8ygGAAA2s+3Ba1W9c3r3t5M8tar0AgAA\nLEGv1cJuq2Ynyes9VfX1Sb4oyQOTfM98SgIAgPXtZPD6k0lem+SmJC9J8o/mUhEAAJvqXtxt1Wz7\ngq3uvj7Jo44/rqoXzaUiAADYwK5mG6iqA0kevrelAACwHavYi7ooWy1ScG3WX6Tgy5NcO5eKAABg\nA1slr9+7wfbzkrxrj2sBAGAbJK8b6O4/PnFbVR1M8pQkH59XUQAAsJ7d9Lz+cpLPJbl3j2sBAIBN\n7Wbw+s1JvjbJP9zbUgAA2I5VnMJqUXa8PGyS/zPJbUmu2+NaAABgUzsevHb365J8WZLPVtVrq+o1\nVfVX9740AADWs5+Xh93VPK/dfXdV/XCSSzNZLvZze1oVAACsY1eD1yTp7k8luXoPawEAYBu6Vy8R\nXZTd9LwCAMBS7Dp5BQBgOXpt2RUsj+QVAIBhSF4BAAazpucVAABWn+QVAGAwZhsAAIABSF4BAAaz\niitfLYrkFQCAYUheAQAG073sCpZH8goAwDAMXgEAGIa2AQCAwbhgCwAABiB5BQAYjOVhAQBgAJJX\nAIDBWB4WAAAGIHkFABiMRQoAAGAAklcAgMGYbQAAAAYgeQUAGIzZBgAAYACSVwCAwZhtAAAABiB5\nBQAYjNkGAABgAJLXU8Rf/Nr1yy5h33jTA/7mskvYN57x6aPLLmFf+Oyxu5ddwr5xbO3eZZfAKcJs\nAwAAMACDVwAAhqFtAABgMC7YAgCAAUheAQAGs4/XKJC8AgAwDoNXAIDBrHUt7LYdVXVpVd1cVbdU\n1YvXef5HquqGqvqjqvqvVXXRzHM/NX3dzVX17Vsdy+AVAIBdq6qDSa5M8neSXJTkmbOD06k3dPdj\nuvtxSX4hycunr70oyeVJHp3k0iSvnL7fhgxeAQAG010Lu23DE5Lc0t23dvfdSa5Kctl96+1PzTw8\nM59v270syVXdfVd3fyTJLdP325DBKwAAG6qqQ1V1dOZ26IRdzk1y28zj26fbTnyf51XVhzNJXl+w\nk9fOMtsAAMBg1hZ4rO4+nOTwHrzPlUmurKrvSfLPk3z/bt5H8goAwMm4I8n5M4/Pm27byFVJnrbL\n1xq8AgCMplMLu23DdUkurKoLquqMTC7AOjK7Q1VdOPPw7yX50PT+kSSXV9X9quqCJBcmec9mB9M2\nAADArnX3sap6fpJrkhxM8pruvrGqrkhytLuPJHl+VX1bknuSfCLTloHpflcnuSnJsSTP6+57Nzue\nwSsAwGDWVmyJre5+W5K3nbDtZ2fu/9gmr31pkpdu91jaBgAAGIbkFQBgMGvb60U9JUleAQAYhsEr\nAADD0DYAADCYbU5hdUqSvAIAMAzJKwDAYBa5POyqkbwCADAMySsAwGD0vAIAwAAkrwAAg9HzCgAA\nA5C8AgAMRvIKAAADkLwCAAzGbAMAADAAySsAwGDW9m/wKnkFAGAcklcAgMGs6XkFAIDVZ/AKAMAw\ntA0AAAyml13AEkleAQAYhuQVAGAwlocFAIABSF4BAAazVqbKAgCAlSd5BQAYzH6ebWDLwWtVfetm\nz3f37+xdOQAAsLHtJK/ft8lzneQLBq9VdSjJoSSpg2flwI
Ezd1cdAABfYD/PNrDl4LW7n7PVPlV1\n/+7+3MxrDic5nCSnnXHufk62AQDYQ7u6YKuqrqiqn6iq86ebXrGHNQEAsIm1Wtxt1ex2toEPJzl9\n+meS6AsAAGDudjzbQFUdH6j+3SSnVdVl3b1ZXywAAHtoLSsYiS7IbqbKeniSByf5V0kekeQVVXWg\nu9+8l4UBAMCJdjx47e4bk9x4/HFVXZvklUkMXgEAFmA/Xw2/457XqnrJ7OPuviHJQ/esIgAA2MBu\nLtj6ldkHVXVBkj/dm3IAAGBju2kb+JMkqaqfS3JRkq9N8k/2uC4AADawilNYLcpuLtg67mVJvirJ\nnd195x7VAwAAG9r14LW7P5Pk+j2sBQCAbdjPy8PudpECAABYuJNpGwAAYAlMlQUAAAOQvAIADGY/\nzzYgeQUAYBiSVwCAwZhtAAAABiB5BQAYjOQVAAAGIHkFABhMm20AAABWn+QVAGAwel4BAGAABq8A\nAAxD2wAAwGC0DQAAwAAkrwAAg+llF7BEklcAAIYheQUAGMyaRQoAAGD1SV4BAAZjtgEAABiA5BUA\nYDCSVwAAGIDkFQBgMOZ5BQCAAUheAQAGY55XAAAYgOQVAGAwZhsAAIABGLwCADAMbQMAAIMxVRYA\nAAxA8nqKePwffWLZJewbt9z82mWXsG/86qN/etkl7Atfff6fLbuEfeORH/jQskvgFLG2j7NXySsA\nAMOQvAIADMZUWQAAMADJKwDAYPZvx6vkFQCAgUheAQAGo+cVAAAGIHkFABjMWi27guWRvAIAMAzJ\nKwDAYKywBQAAA5C8AgAMZv/mrpJXAAAGYvAKAMAwtA0AAAzGIgUAADAAySsAwGBMlQUAAAOQvAIA\nDGb/5q6SVwAABiJ5BQAYjNkGAABgAJJXAIDBmG0AAAAGIHkFABjM/s1dJa8AAAxE8goAMBizDQAA\nwAAkrwAAg+l93PUqeQUAYBgGrwAADEPbAADAYFywBQAAA5C8AgAMxvKwAAAwAMkrAMBg9m/uKnkF\nAGAgklcAgMHoeQUAgAFIXgEABmOeVwAAGIDkFQBgMK3nFQAAVt+2B69V9b3zLAQAgO1ZW+Bt1ewk\neT0nSarq3VV1Y1U9b041AQDAunbS83p6knT3N1bVg5PcnuTK9XasqkNJDiVJHTwrBw6cebJ1AgAw\nped1G7r7F2Ye/u8kBzfZ93B3X9zdFxu4AgCc2qrq0qq6uapuqaoXr/P8k6rq+qo6VlVPP+G5e6vq\nj6a3I1sda0ezDVTV6zNZTvexSd66k9cCAHDqqaqDmXwa/+RMPpm/rqqOdPdNM7v9SZJnJ3nhOm/x\n2e5+3HaPt9Opsl6VpJJckOQnd/haAAD2wIpdSPWEJLd0961JUlVXJbksyV8OXrv7o9PnTrr0HU2V\n1d3vTHJHJulrnezBAQAY3rlJbpt5fPt023bdv6qOVtXvV9XTttp5N4sUvC7JVyV55S5eCwDASVrr\nxV2wNXsh/tTh7j68h4f4a919R1U9IsnvVNUN3f3hjXbe8eB1OtvAmUluTvLPT6JQAABW3HSgutlg\n9Y4k5888Pm+6bbvvf8f0z1ur6h1J/kaSDQevO15hq6qem+SXkvzBTl8LAMDJ6wXetuG6JBdW1QVV\ndUaSy5NsOWtAklTVg6rqftP7Zyd5YmZ6Zdezm+Vhz0vyoCT328VrAQA4hXT3sSTPT3JNkv+W5Oru\nvrGqrqiq70iSqnp8Vd2e5LuS/LuqunH68kclOVpV70vyu0l+/oRZCr7AbtoGfm5axD07fS0AACdv\nbcUWKejutyV52wnbfnbm/nWZBKAnvu7dSR6zk2NtmrxW1VdX1SPX2X4gyf/cyYEAAOBkbdU2cHqS\nl89uqKovSnJ1kjfOqygAADbWC/xv1Ww6eO3u9yZ5YFVdkCRVdV6Sd2cy8ewH5l8eAAB83nYu2Prl\nJD9RVU/MpJH2iiQPiW
myAACWYm2Bt1WznQu2rk7yw0m+KcnTuvvGJKmqs+ZZGAAAnGjLwWt3353k\nkiSpqjOr6llJfjDJWlVd1t1vnXONAADMWLXZBhZpp1NlPTzJlyf5V0kekeQVVXWgu9+814UBAMCJ\ndjR4nbYMHJ9UNlV1bZJXJjF4BQBYkFWcBWBRdrTCVlW9ZPZxd9+Q5KF7WhEAAGxgp8vD/srsg+kU\nWn+6d+UAAMDGdto28CdJUlU/l+SiJF+b5J/MoS4AADawilNYLcpOL9g67mVJvirJnd195x7WAwAA\nG9rV4LW7P5Pk+j2uBQCAbeh2wRYAAKy83bYNAACwJPt5kQLJKwAAw5C8AgAMZj/PNiB5BQBgGJJX\nAIDBWB4WAAAGIHkFABiM2QYAAGAAklcAgMFYYQsAAAYgeQUAGIx5XgEAYACSVwCAwZjnFQAABmDw\nCgDAMLQNAAAMxiIFAAAwAMkrAMBgLFIAAAADkLwCAAxGzysAAAxA8nqKuPMvPr7sEvaNh33FU5dd\nwr5xwzecs+wS9oWPf/SLll3CvvH8L3/iskvgFGGRAgAAGIDkFQBgMGtmGwAAgNUneQUAGMz+zV0l\nrwAADETyCgAwGPO8AgDAACSvAACDkbwCAMAADF4BABiGtgEAgMG0RQoAAGD1SV4BAAbjgi0AABiA\n5BUAYDAteQUAgNUneQUAGIzZBgAAYACSVwCAwZhtAAAABiB5BQAYjJ5XAAAYgOQVAGAwel4BAGAA\nklcAgMFYYQsAAAZg8AoAwDC0DQAADGbNVFkAALD6JK8AAINxwRYAAAxA8goAMBg9rwAAMADJKwDA\nYPS8AgDAACSvAACD0fMKAAADkLwCAAxGzysAAAxA8goAMBg9rwAAMADJKwDAYPS8AgDAAAxeAQAY\nxo4Gr1X1iqo6fxv7Haqqo1V1dG3t07uvDgCAL9C9trDbqtn24LWqnpjk2Umeu9W+3X24uy/u7osP\nHDjzJMoDAIDP20ny+gNJnpfk8qqqOdUDAMAW1tILu62abQ1eq+oBSS5J8oYk1yX59nkWBQAA69lu\n8np5kjd1dyd5TbbROgAAwHx098Juq2a7g9cfTPLqJOnu303ymKr60rlVBQAA69hykYKqemCS3+7u\nD81sviLJVyZ597wKAwBgfavYi7ooWw5eu/vPk/zMCdveMLeKAABgA9u9YOvBVfXqqvpP08cXVZW+\nVwCAJdDzurV/n+SaJA+dPv7vSX58HgUBAMBGtjt4Pbu7r06yliTdfSzJvXOrCgCADa11L+y2arY7\neP30dHaBTpKq+vokn5xbVQAAsI4tL9ia+qdJjiT561X1e0nOSfL0uVUFAMCG2mwDm+vu66vqb2Uy\nPVYlubm775lrZQAAcILtzPP6V5Jc2N3vS3LjdNvDqure7r5j3gUCAHBfqzgLwKJsp+f1niRvqqoz\nZ7a9KslD5lMSAACsb8vB67Q94M1JvjuZpK5Jzunuo3OuDQAA7mO7sw28KslzpvefleS18ykHAICt\nrKUXdls1271g64M18cgklye5ZL5lAQDAF9ruVFlJ8upMEtgbuvsTc6oHAIAtuGBre65O8thMBrEA\nALBw205eu/szSc6aYy0AAGzDKi7buig7SV4BAGCpdtLzCgDACtDzCgAAA5C8AgAMZhXnX10UySsA\nAMOQvAIADEbPKwAADEDyCgAwGPO8AgDAACSvAACDabMNAADA6jN4BQBgGNoGAAAG44ItAAAYgOQV\nAGAwFikAAIABSF4BAAZjqiwAABiA5BUAYDB6XgEAYAAGrwAAg+nuhd22o6ouraqbq+qWqnrxOs/f\nr6p+bfr8H1TVw2ee+6np9pur6tu3OpbBKwAAu1ZVB5NcmeTvJLkoyTOr6qITdntukk9091ck+aUk\nL5u+9qIklyd5dJJLk7xy+n4bMngFABhML/C2DU9Ickt339rddye5KsllJ+xzWZLXTe//
epK/XVU1\n3X5Vd9/V3R9Jcsv0/TZk8AoAwMk4N8ltM49vn25bd5/uPpbkk0m+dJuvvY+5zzZw7O47at7HmIeq\nOtTdh5ddx6nOeV4c53oxRjzPZy+7gF0a8Vz/X8suYJdGPNenukWOr6rqUJJDM5sOL/P7QfK6sUNb\n78IecJ4Xx7leDOd5cZzrxXGu97HuPtzdF8/cThy43pHk/JnH5023rbtPVZ2W5KwkH9vma+/D4BUA\ngJNxXZILq+qCqjojkwuwjpywz5Ek3z+9//Qkv9OTqQyOJLl8OhvBBUkuTPKezQ5mkQIAAHatu49V\n1fOTXJPkYJLXdPeNVXVFkqPdfSTJq5O8vqpuSfLxTAa4me53dZKbkhxL8rzuvnez4xm8bkxvz2I4\nz4vjXC+G87w4zvXiONdsqrvfluRtJ2z72Zn7n0vyXRu89qVJXrrdY9V+Xl4MAICx6HkFAGAYBq/r\nqKp3L7uGUVXVw6vqA8uuY7+qqr9Ydg2wV45/P1fVQ6vq15ddD7AaDF7X0d3fuOwaYNGmU5fAyunu\n/9HdT192HftBVZ1RVWdusc+DFlUPrMfgdR3Sq71RVY+oqvdW1Yuq6k1V9Z+r6kNV9Qsz+zyzqm6o\nqg9U1fF1jr+rql4+vf9jVXXrzPv93nK+mtUwPZcvmN7/par6nen9b62qX53ef2lVva+qfr+qHjzd\ndk5VvbGqrpvenjjd/i+r6vXT8/r6qjpYVb843ef9VfXDS/pSV1ZVXVFVPz7z+KXT79NfnH4f31BV\nz5g+981V9Rsz+/7bqnr2Esoe2uwnOtPv60fPPPeOqrq4qs6sqtdU1XumP3dOXJqSTVTVo6rq/05y\nc5JHTrd9XVW9s6r+sKquqaqHTHd/0fQ8/3BVfcnSimbfMnhlLqrqK5O8Mcmzk/xZkscleUaSxyR5\nRlWdX1UPTfKyJN86ff7xVfW0JNcmuWT6Vpck+VhVnTu9/65Ffh0raPbcXJzki6vq9Hz+3JyZ5Pe7\n+7HTxz803fdfJ/ml7n58ku9M8qqZ97woybd19zOTPDfJJ6f7PT7JD03n3ePzXpPkWUlSVQcyme7l\n9ky+hx+b5NuS/OLML3r21q8l+e4kmZ7jh3T30SQ/k8m8kU9I8i2Z/B1smiDud9MB/3Oq6r8m+ZVM\npir6mu5+7/Tnyv+T5Ond/XWZfN+/NEm6+6eTfF+SRyS5vqpeW1XftJyvgv3Ix4TMwzlJ3prkH3b3\nTVX1N5L8l+7+ZJJU1U1J/lomaxq/o7v/bLr9V5M8qbvfUlVfXFUPyGTVjTckeVImA7Q3Lf7LWSl/\nmOTrpmnHXUmuz2QQe0mSFyS5O8lvzOz75On9b0tyUdVfrib4JVX1xdP7R7r7s9P7T0nyNVV1/CPa\nszKZMPoj8/lyxtPdH62qj01LBBolAAADFElEQVS/rx+c5L1JvinJf5zOTfi/quqdmQz+P7XEUk9V\nVyf5rST/IpNB7PFe2Kck+Y6qeuH08f2TPCzJf1t4heO4M8n7k/xgd3/whOe+MslXJ3n79OfGwen+\nSZLuvjnJP6uqn07yzCS/WVWv6+4XLKRy9jWDV+bhk0n+JJNf6DdNt9018/y92fp7791JnpPJR1jX\nJvmBJN+Q5Cf2tNLBdPc9VfWRTBLtd2fyi+dbknxFJr+k7+nPz383e54PJPn66Tx7f2n6S+nTs5uS\n/Gh3XzOvr+EU8apM/g6+PJNE6skb7Hcs9/2E6/7zLevU1913TP/x8DWZfJrzI9OnKsl3TgdVbM/T\nM/m05U1VdVWS13X3H0+fqyQ3dvc3rPfCmvzw+JZMfjY/Icm/yX0/0YG50TbAPNyd5B8keVZVfc8m\n+70nyd+qqrOr6mAm/3p/5/S5a5O8MJOPvt+byQ/Ju46nt/vc7Lm5NpNf3u+dGbSu57eS/OjxB1X1\nuA32uybJP55+ZJiqeqSPXtf15iSXZpKuXpPJ38Mz
pj3D52TyScF7kvxxJon3/arqgUn+9rIKPsX8\nWpKfTHJWd79/uu2aJD86HVRlmoyzie7+re5+Riaf3HwyyVur6rer6uGZBAfnVNU3JElVnX6817iq\n/lGSDyZ5XiafjD2qu/+PmYEvzJXklbno7k9X1VOTvD3J6zfY586qenGS383kX/m/2d1vnT59bSYt\nA+/q7nur6rZMflgyOTc/k+T/m57nz023beYFSa6sqvdn8v/9u/L5xGrWq5I8PJM+tsqkX/lpe1X4\nqaK7766q303y59Pvzzdn8snA+5J0kp/s7v+ZJDVZ9vADmbRevHdZNZ9ifj2TPu6XzGx7SZJXJHn/\ntBf5I0meuoTahtPdH8vkfP7rqnpCknun3+NPT/JvquqsTH5uvCLJjZn8o+ybjrd8waJZYQtgh6aD\no+uTfFd3f2jZ9QDsJ9oGAHagqi5KcksmFyEauAIsmOQVAIBhSF4BABiGwSsAAMMweAUAYBgGrwAA\nDMPgFQCAYRi8AgAwjP8flxvqpqIiQeMAAAAASUVORK5CYII=\n",
            "text/plain": [
              "<Figure size 1440x720 with 2 Axes>"
            ]
          },
          "metadata": {
            "tags": []
          }
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rTTZ5IzEx9jG",
        "colab_type": "text"
      },
      "source": [
        "# References\n",
        "\n",
        "1. https://www.tensorflow.org/beta/tutorials/text/nmt_with_attention\n",
        "2. https://medium.com/syncedreview/a-brief-overview-of-attention-mechanism-13c578ba9129"
      ]
    }
  ]
}