{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Creating Sequence-to-Sequence Models\n",
     "\n",
     "----------------------------------\n",
     "\n",
     "Here we show how to implement sequence-to-sequence models. Specifically, we will build an English-to-German translation model.\n",
     "\n",
     "The code for this section has been upgraded to use the Neural Machine Translation (NMT) models provided by the official TensorFlow repository:\n",
     "\n",
     "https://github.com/tensorflow/nmt\n",
     "\n",
     "This section shows how to download the repository, use and modify its hyperparameters, and configure your own data for the project files.\n",
     "\n",
     "While the official tutorials show how to do this from the command line, this tutorial uses the provided internal code to train a model from scratch.\n",
     "\n",
     "We start by loading the necessary libraries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n",
      "  return f(*args, **kwds)\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import re\n",
    "import sys\n",
    "import json\n",
    "import math\n",
    "import time\n",
    "import string\n",
    "import requests\n",
    "import io\n",
    "import numpy as np\n",
    "import collections\n",
    "import random\n",
    "import pickle\n",
    "import matplotlib.pyplot as plt\n",
    "import tensorflow as tf\n",
    "from zipfile import ZipFile\n",
    "from collections import Counter\n",
    "from tensorflow.python.ops import lookup_ops\n",
    "from tensorflow.python.framework import ops\n",
    "ops.reset_default_graph()\n",
    "\n",
    "local_repository = 'temp/seq2seq'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The following block of code clones the entire TensorFlow NMT repository into the temp folder so its modules can be imported."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
     "# The NMT code can be retrieved from GitHub: https://github.com/tensorflow/nmt.git\n",
     "# Put the cloned directory on the Python search path.\n",
     "\n",
     "if not os.path.exists(local_repository):\n",
     "    from git import Repo  # requires the GitPython package\n",
     "    tf_model_repository = 'https://github.com/tensorflow/nmt/'\n",
     "    Repo.clone_from(tf_model_repository, local_repository)\n",
     "# Add the path outside the if-block so it is set even when the repo already exists\n",
     "sys.path.insert(0, 'temp/seq2seq/nmt/')\n",
    "\n",
    "# May also try to use 'attention model' by importing the attention model:\n",
    "# from temp.seq2seq.nmt import attention_model as attention_model\n",
    "from temp.seq2seq.nmt import model as model\n",
    "from temp.seq2seq.nmt.utils import vocab_utils as vocab_utils\n",
    "import temp.seq2seq.nmt.model_helper as model_helper\n",
    "import temp.seq2seq.nmt.utils.iterator_utils as iterator_utils\n",
    "import temp.seq2seq.nmt.utils.misc_utils as utils\n",
    "import temp.seq2seq.nmt.train as train"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next we set some parameters: the vocabulary size, the punctuation we will remove, and where the data will be stored."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Model Parameters\n",
    "vocab_size = 10000\n",
    "punct = string.punctuation\n",
    "\n",
    "# Data Parameters\n",
    "data_dir = 'temp'\n",
    "data_file = 'eng_ger.txt'\n",
    "model_path = 'seq2seq_model'\n",
    "full_model_dir = os.path.join(data_dir, model_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We will use the hyperparameter format that TensorFlow provides. Storing parameters in external JSON or XML files lets us iterate over different architectures programmatically. For this demonstration, we use the provided wmt16.json file and make a few changes below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Load hyperparameters for the translation model (good defaults are provided in the repository).\n",
    "hparams = tf.contrib.training.HParams()\n",
    "param_file = 'temp/seq2seq/nmt/standard_hparams/wmt16.json'\n",
    "# Can also try: (For different architectures)\n",
    "# 'temp/seq2seq/nmt/standard_hparams/iwslt15.json'\n",
    "# 'temp/seq2seq/nmt/standard_hparams/wmt16_gnmt_4_layer.json',\n",
    "# 'temp/seq2seq/nmt/standard_hparams/wmt16_gnmt_8_layer.json',\n",
    "\n",
    "with open(param_file, \"r\") as f:\n",
    "    params_json = json.loads(f.read())\n",
    "\n",
    "for key, value in params_json.items():\n",
    "    hparams.add_hparam(key, value)\n",
    "hparams.add_hparam('num_gpus', 0)\n",
    "hparams.add_hparam('num_encoder_layers', hparams.num_layers)\n",
    "hparams.add_hparam('num_decoder_layers', hparams.num_layers)\n",
    "hparams.add_hparam('num_encoder_residual_layers', 0)\n",
    "hparams.add_hparam('num_decoder_residual_layers', 0)\n",
    "hparams.add_hparam('init_op', 'uniform')\n",
    "hparams.add_hparam('random_seed', None)\n",
    "hparams.add_hparam('num_embeddings_partitions', 0)\n",
    "hparams.add_hparam('warmup_steps', 0)\n",
    "hparams.add_hparam('length_penalty_weight', 0)\n",
    "hparams.add_hparam('sampling_temperature', 0.0)\n",
    "hparams.add_hparam('num_translations_per_input', 1)\n",
    "hparams.add_hparam('warmup_scheme', 't2t')\n",
    "hparams.add_hparam('epoch_step', 0)\n",
    "hparams.num_train_steps = 5000\n",
    "\n",
     "# Do not use any pretrained embeddings\n",
    "hparams.add_hparam('src_embed_file', '')\n",
    "hparams.add_hparam('tgt_embed_file', '')\n",
    "hparams.add_hparam('num_keep_ckpts', 5)\n",
    "hparams.add_hparam('avg_ckpts', False)\n",
    "\n",
    "# Remove attention\n",
    "hparams.attention = None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make the model and data directories if they do not exist already."
   ]
  },
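  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, on Python 3 the existence checks below can be folded into `os.makedirs` itself with `exist_ok=True`; a minimal sketch (hypothetical path):\n",
    "\n",
    "```python\n",
    "import os\n",
    "\n",
    "# Creates intermediate directories as needed; no error if the path already exists\n",
    "os.makedirs('temp/seq2seq_model', exist_ok=True)\n",
    "```"
   ]
  },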
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Make Model Directory\n",
    "if not os.path.exists(full_model_dir):\n",
    "    os.makedirs(full_model_dir)\n",
    "\n",
    "# Make data directory\n",
    "if not os.path.exists(data_dir):\n",
    "    os.makedirs(data_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next we load the English-German translation data, either from disk or, if it is not found there, by downloading it from the internet (and saving it for future use)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loading English-German Data\n",
      "Data not found, downloading Eng-Ger sentences from www.manythings.org\n",
      "Done!\n"
     ]
    }
   ],
   "source": [
    "print('Loading English-German Data')\n",
    "# Check for data, if it doesn't exist, download it and save it\n",
    "if not os.path.isfile(os.path.join(data_dir, data_file)):\n",
    "    print('Data not found, downloading Eng-Ger sentences from www.manythings.org')\n",
    "    sentence_url = 'http://www.manythings.org/anki/deu-eng.zip'\n",
    "    r = requests.get(sentence_url)\n",
    "    z = ZipFile(io.BytesIO(r.content))\n",
    "    file = z.read('deu.txt')\n",
    "    # Format Data\n",
    "    eng_ger_data = file.decode()\n",
    "    eng_ger_data = eng_ger_data.encode('ascii', errors='ignore')\n",
    "    eng_ger_data = eng_ger_data.decode().split('\\n')\n",
    "    # Write to file\n",
    "    with open(os.path.join(data_dir, data_file), 'w') as out_conn:\n",
    "        for sentence in eng_ger_data:\n",
    "            out_conn.write(sentence + '\\n')\n",
    "else:\n",
    "    eng_ger_data = []\n",
    "    with open(os.path.join(data_dir, data_file), 'r') as in_conn:\n",
    "        for row in in_conn:\n",
    "            eng_ger_data.append(row[:-1])\n",
    "print('Done!')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Now we remove punctuation and split the translation data into lists of words for both the English and German sentences."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Remove punctuation\n",
    "eng_ger_data = [''.join(char for char in sent if char not in punct) for sent in eng_ger_data]\n",
     "# Split each line into its English and German sentences (tab-separated)\n",
    "eng_ger_data = [x.split('\\t') for x in eng_ger_data if len(x) >= 1]\n",
    "[english_sentence, german_sentence] = [list(x) for x in zip(*eng_ger_data)]\n",
    "english_sentence = [x.lower().split() for x in english_sentence]\n",
    "german_sentence = [x.lower().split() for x in german_sentence]"
   ]
  },
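  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, the character-by-character comprehension above can also be done with `str.translate`, which is typically faster on large corpora; a minimal sketch with a toy string:\n",
    "\n",
    "```python\n",
    "import string\n",
    "\n",
    "# Build a translation table that deletes every punctuation character\n",
    "table = str.maketrans('', '', string.punctuation)\n",
    "print('Hello, world!'.translate(table))  # Hello world\n",
    "```"
   ]
  },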
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to use the faster, data-pipeline functions from TensorFlow, we will want to write the formatted data to disk in an appropriate format.\n",
    "\n",
     "The format the translation models expect is:\n",
     "\n",
     " - train_prefix.source_suffix = train.en\n",
     " - train_prefix.target_suffix = train.de\n",
     "\n",
     "The suffix determines the language (en = English, de = Deutsch/German) and the prefix determines the dataset split (train, test)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# We need to write them to separate text files for the text-line-dataset operations.\n",
    "train_prefix = 'train'\n",
    "src_suffix = 'en'  # English\n",
    "tgt_suffix = 'de'  # Deutsch (German)\n",
    "source_txt_file = train_prefix + '.' + src_suffix\n",
    "hparams.add_hparam('src_file', source_txt_file)\n",
    "target_txt_file = train_prefix + '.' + tgt_suffix\n",
    "hparams.add_hparam('tgt_file', target_txt_file)\n",
    "with open(source_txt_file, 'w') as f:\n",
    "    for sent in english_sentence:\n",
    "        f.write(' '.join(sent) + '\\n')\n",
    "\n",
    "with open(target_txt_file, 'w') as f:\n",
    "    for sent in german_sentence:\n",
    "        f.write(' '.join(sent) + '\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next we partition off a small test set (about 100 sentence translations), chosen at evenly spaced intervals, and write it to the appropriate files."
   ]
  },
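  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The even-interval selection below works by keeping every (total // num_sample)-th index; a minimal sketch with toy numbers:\n",
    "\n",
    "```python\n",
    "total_samples, num_sample = 1000, 100\n",
    "# Keep every (total_samples // num_sample)-th index: 0, 10, 20, ...\n",
    "ix = [i for i in range(total_samples) if i % (total_samples // num_sample) == 0]\n",
    "print(len(ix))  # 100\n",
    "```"
   ]
  },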
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Partition some sentences off for testing files\n",
    "test_prefix = 'test_sent'\n",
    "hparams.add_hparam('dev_prefix', test_prefix)\n",
    "hparams.add_hparam('train_prefix', train_prefix)\n",
    "hparams.add_hparam('test_prefix', test_prefix)\n",
    "hparams.add_hparam('src', src_suffix)\n",
    "hparams.add_hparam('tgt', tgt_suffix)\n",
    "\n",
    "num_sample = 100\n",
    "total_samples = len(english_sentence)\n",
     "# Keep every (total_samples // num_sample)-th index to get about num_sample test pairs\n",
    "ix_sample = [x for x in range(total_samples) if x % (total_samples // num_sample) == 0]\n",
    "test_src = [' '.join(english_sentence[x]) for x in ix_sample]\n",
    "test_tgt = [' '.join(german_sentence[x]) for x in ix_sample]\n",
    "\n",
    "# Write test sentences to file\n",
    "with open(test_prefix + '.' + src_suffix, 'w') as f:\n",
    "    for eng_test in test_src:\n",
    "        f.write(eng_test + '\\n')\n",
    "\n",
    "with open(test_prefix + '.' + tgt_suffix, 'w') as f:\n",
     "    for ger_test in test_tgt:\n",
    "        f.write(ger_test + '\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next we build the vocabularies for both the English and German sentences, then save the vocabulary lists to the appropriate files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Processing the vocabularies.\n"
     ]
    }
   ],
   "source": [
    "print('Processing the vocabularies.')\n",
    "# Process the English Vocabulary\n",
    "all_english_words = [word for sentence in english_sentence for word in sentence]\n",
    "all_english_counts = Counter(all_english_words)\n",
     "eng_word_keys = [x[0] for x in all_english_counts.most_common(vocab_size - 3)]  # -3 because <unk>, <s>, </s> are reserved\n",
     "eng_vocab2ix = dict(zip(eng_word_keys, range(3, vocab_size)))  # indices 0, 1, 2 are reserved for <unk>, <s>, </s>\n",
    "eng_ix2vocab = {val: key for key, val in eng_vocab2ix.items()}\n",
    "english_processed = []\n",
    "for sent in english_sentence:\n",
    "    temp_sentence = []\n",
    "    for word in sent:\n",
    "        try:\n",
    "            temp_sentence.append(eng_vocab2ix[word])\n",
    "        except KeyError:\n",
    "            temp_sentence.append(0)\n",
    "    english_processed.append(temp_sentence)\n",
    "\n",
    "\n",
    "# Process the German Vocabulary\n",
    "all_german_words = [word for sentence in german_sentence for word in sentence]\n",
    "all_german_counts = Counter(all_german_words)\n",
     "ger_word_keys = [x[0] for x in all_german_counts.most_common(vocab_size - 3)]  # -3 because <unk>, <s>, </s> are reserved\n",
     "ger_vocab2ix = dict(zip(ger_word_keys, range(3, vocab_size)))  # indices 0, 1, 2 are reserved for <unk>, <s>, </s>\n",
    "ger_ix2vocab = {val: key for key, val in ger_vocab2ix.items()}\n",
    "german_processed = []\n",
    "for sent in german_sentence:\n",
    "    temp_sentence = []\n",
    "    for word in sent:\n",
    "        try:\n",
    "            temp_sentence.append(ger_vocab2ix[word])\n",
    "        except KeyError:\n",
    "            temp_sentence.append(0)\n",
    "    german_processed.append(temp_sentence)\n",
    "\n",
    "\n",
    "# Save vocab files for data processing\n",
    "source_vocab_file = 'vocab' + '.' + src_suffix\n",
    "hparams.add_hparam('src_vocab_file', source_vocab_file)\n",
    "eng_word_keys = ['<unk>', '<s>', '</s>'] + eng_word_keys\n",
    "\n",
    "target_vocab_file = 'vocab' + '.' + tgt_suffix\n",
    "hparams.add_hparam('tgt_vocab_file', target_vocab_file)\n",
    "ger_word_keys = ['<unk>', '<s>', '</s>'] + ger_word_keys\n",
    "\n",
    "# Write out all unique english words\n",
    "with open(source_vocab_file, 'w') as f:\n",
    "    for eng_word in eng_word_keys:\n",
    "        f.write(eng_word + '\\n')\n",
    "\n",
    "# Write out all unique german words\n",
    "with open(target_vocab_file, 'w') as f:\n",
    "    for ger_word in ger_word_keys:\n",
    "        f.write(ger_word + '\\n')\n",
    "\n",
    "# Add vocab size to hyper parameters\n",
    "hparams.add_hparam('src_vocab_size', vocab_size)\n",
    "hparams.add_hparam('tgt_vocab_size', vocab_size)\n",
    "\n",
    "# Add out-directory\n",
    "out_dir = 'temp/seq2seq/nmt_out'\n",
    "hparams.add_hparam('out_dir', out_dir)\n",
    "if not tf.gfile.Exists(out_dir):\n",
    "    tf.gfile.MakeDirs(out_dir)\n"
   ]
  },
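  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, the `try`/`except KeyError` lookups above can be written more compactly with `dict.get`, mapping out-of-vocabulary words to index 0 (`<unk>`); a minimal sketch with a toy vocabulary:\n",
    "\n",
    "```python\n",
    "vocab2ix = {'hello': 3, 'world': 4}  # toy vocabulary; 0 is reserved for <unk>\n",
    "ids = [vocab2ix.get(word, 0) for word in 'hello there world'.split()]\n",
    "print(ids)  # [3, 0, 4]\n",
    "```"
   ]
  },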
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next we create the training, evaluation, and inference graphs separately.\n",
     "\n",
     "First, the training graph. We build it with a class that subclasses a named tuple holding the graph components. This code comes from the NMT repository; see the file 'model_helper.py' there for more."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "# creating train graph ...\n",
      "  num_bi_layers = 2, num_bi_residual_layers=0\n",
      "  cell 0  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "  cell 0  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/rnn.py:417: calling reverse_sequence (from tensorflow.python.ops.array_ops) with seq_dim is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "seq_dim is deprecated, use seq_axis instead\n",
      "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py:432: calling reverse_sequence (from tensorflow.python.ops.array_ops) with batch_dim is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "batch_dim is deprecated, use batch_axis instead\n",
      "  cell 0  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "  cell 2  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "  cell 3  LSTM, forget_bias=1  DropoutWrapper, dropout=0.2   DeviceWrapper, device=/cpu:0\n",
      "  learning_rate=1, warmup_steps=0, warmup_scheme=t2t\n",
      "  decay_scheme=luong10, start_decay_step=2500, decay_steps 250, decay_factor 0.5\n",
      "# Trainable variables\n",
      "  embeddings/encoder/embedding_encoder:0, (10000, 1024), /device:GPU:0\n",
      "  embeddings/decoder/embedding_decoder:0, (10000, 1024), /device:GPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_2/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_2/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_3/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_3/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/output_projection/kernel:0, (1024, 10000), \n"
     ]
    }
   ],
   "source": [
    "class TrainGraph(collections.namedtuple(\"TrainGraph\", (\"graph\", \"model\", \"iterator\", \"skip_count_placeholder\"))):\n",
    "    pass\n",
    "\n",
    "\n",
    "def create_train_graph(scope=None):\n",
    "    graph = tf.Graph()\n",
    "    with graph.as_default():\n",
    "        src_vocab_table, tgt_vocab_table = vocab_utils.create_vocab_tables(hparams.src_vocab_file,\n",
    "                                                                           hparams.tgt_vocab_file,\n",
    "                                                                           share_vocab=False)\n",
    "\n",
    "        src_dataset = tf.data.TextLineDataset(hparams.src_file)\n",
    "        tgt_dataset = tf.data.TextLineDataset(hparams.tgt_file)\n",
    "        skip_count_placeholder = tf.placeholder(shape=(), dtype=tf.int64)\n",
    "\n",
    "        iterator = iterator_utils.get_iterator(src_dataset, tgt_dataset, src_vocab_table, tgt_vocab_table,\n",
    "                                               batch_size=hparams.batch_size,\n",
    "                                               sos=hparams.sos,\n",
    "                                               eos=hparams.eos,\n",
    "                                               random_seed=None,\n",
    "                                               num_buckets=hparams.num_buckets,\n",
    "                                               src_max_len=hparams.src_max_len,\n",
    "                                               tgt_max_len=hparams.tgt_max_len,\n",
    "                                               skip_count=skip_count_placeholder)\n",
    "        final_model = model.Model(hparams,\n",
    "                                  iterator=iterator,\n",
    "                                  mode=tf.contrib.learn.ModeKeys.TRAIN,\n",
    "                                  source_vocab_table=src_vocab_table,\n",
    "                                  target_vocab_table=tgt_vocab_table,\n",
    "                                  scope=scope)\n",
    "\n",
    "    return TrainGraph(graph=graph, model=final_model, iterator=iterator, skip_count_placeholder=skip_count_placeholder)\n",
    "\n",
    "\n",
    "train_graph = create_train_graph()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We now create the evaluation graph in much the same way."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "# creating eval graph ...\n",
      "  num_bi_layers = 2, num_bi_residual_layers=0\n",
      "  cell 0  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 0  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 0  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 2  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 3  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "# Trainable variables\n",
      "  embeddings/encoder/embedding_encoder:0, (10000, 1024), /device:GPU:0\n",
      "  embeddings/decoder/embedding_decoder:0, (10000, 1024), /device:GPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_2/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_2/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_3/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_3/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/output_projection/kernel:0, (1024, 10000), \n"
     ]
    }
   ],
   "source": [
    "# Create the evaluation graph\n",
    "class EvalGraph(collections.namedtuple(\"EvalGraph\", (\"graph\", \"model\", \"src_file_placeholder\", \"tgt_file_placeholder\",\n",
    "                                                     \"iterator\"))):\n",
    "    pass\n",
    "\n",
    "\n",
    "def create_eval_graph(scope=None):\n",
    "    graph = tf.Graph()\n",
    "\n",
    "    with graph.as_default():\n",
    "        src_vocab_table, tgt_vocab_table = vocab_utils.create_vocab_tables(\n",
    "            hparams.src_vocab_file, hparams.tgt_vocab_file, hparams.share_vocab)\n",
    "        src_file_placeholder = tf.placeholder(shape=(), dtype=tf.string)\n",
    "        tgt_file_placeholder = tf.placeholder(shape=(), dtype=tf.string)\n",
    "        src_dataset = tf.data.TextLineDataset(src_file_placeholder)\n",
    "        tgt_dataset = tf.data.TextLineDataset(tgt_file_placeholder)\n",
    "        iterator = iterator_utils.get_iterator(\n",
    "            src_dataset,\n",
    "            tgt_dataset,\n",
    "            src_vocab_table,\n",
    "            tgt_vocab_table,\n",
    "            hparams.batch_size,\n",
    "            sos=hparams.sos,\n",
    "            eos=hparams.eos,\n",
    "            random_seed=hparams.random_seed,\n",
    "            num_buckets=hparams.num_buckets,\n",
    "            src_max_len=hparams.src_max_len_infer,\n",
    "            tgt_max_len=hparams.tgt_max_len_infer)\n",
    "        final_model = model.Model(hparams,\n",
    "                                  iterator=iterator,\n",
    "                                  mode=tf.contrib.learn.ModeKeys.EVAL,\n",
    "                                  source_vocab_table=src_vocab_table,\n",
    "                                  target_vocab_table=tgt_vocab_table,\n",
    "                                  scope=scope)\n",
    "    return EvalGraph(graph=graph,\n",
    "                     model=final_model,\n",
    "                     src_file_placeholder=src_file_placeholder,\n",
    "                     tgt_file_placeholder=tgt_file_placeholder,\n",
    "                     iterator=iterator)\n",
    "\n",
    "\n",
    "eval_graph = create_eval_graph()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And now the same for the inference graph:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "# creating infer graph ...\n",
      "  num_bi_layers = 2, num_bi_residual_layers=0\n",
      "  cell 0  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 0  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 0  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 1  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 2  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "  cell 3  LSTM, forget_bias=1  DeviceWrapper, device=/cpu:0\n",
      "# Trainable variables\n",
      "  embeddings/encoder/embedding_encoder:0, (10000, 1024), /device:GPU:0\n",
      "  embeddings/decoder/embedding_decoder:0, (10000, 1024), /device:GPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_0/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_0/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_1/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_1/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_2/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_2/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_3/basic_lstm_cell/kernel:0, (2048, 4096), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/multi_rnn_cell/cell_3/basic_lstm_cell/bias:0, (4096,), /device:CPU:0\n",
      "  dynamic_seq2seq/decoder/output_projection/kernel:0, (1024, 10000), \n"
     ]
    }
   ],
   "source": [
    "# Inference graph\n",
    "class InferGraph(\n",
    "    collections.namedtuple(\"InferGraph\", (\"graph\", \"model\", \"src_placeholder\", \"batch_size_placeholder\", \"iterator\"))):\n",
    "    pass\n",
    "\n",
    "\n",
    "def create_infer_graph(scope=None):\n",
    "    graph = tf.Graph()\n",
    "    with graph.as_default():\n",
    "        src_vocab_table, tgt_vocab_table = vocab_utils.create_vocab_tables(hparams.src_vocab_file,\n",
    "                                                                           hparams.tgt_vocab_file,\n",
    "                                                                           hparams.share_vocab)\n",
    "        reverse_tgt_vocab_table = lookup_ops.index_to_string_table_from_file(hparams.tgt_vocab_file,\n",
    "                                                                             default_value=vocab_utils.UNK)\n",
    "\n",
    "        src_placeholder = tf.placeholder(shape=[None], dtype=tf.string)\n",
    "        batch_size_placeholder = tf.placeholder(shape=[], dtype=tf.int64)\n",
    "        src_dataset = tf.data.Dataset.from_tensor_slices(src_placeholder)\n",
    "        iterator = iterator_utils.get_infer_iterator(src_dataset,\n",
    "                                                     src_vocab_table,\n",
    "                                                     batch_size=batch_size_placeholder,\n",
    "                                                     eos=hparams.eos,\n",
    "                                                     src_max_len=hparams.src_max_len_infer)\n",
    "        final_model = model.Model(hparams,\n",
    "                                  iterator=iterator,\n",
    "                                  mode=tf.contrib.learn.ModeKeys.INFER,\n",
    "                                  source_vocab_table=src_vocab_table,\n",
    "                                  target_vocab_table=tgt_vocab_table,\n",
    "                                  reverse_target_vocab_table=reverse_tgt_vocab_table,\n",
    "                                  scope=scope)\n",
    "    return InferGraph(graph=graph,\n",
    "                      model=final_model,\n",
    "                      src_placeholder=src_placeholder,\n",
    "                      batch_size_placeholder=batch_size_placeholder,\n",
    "                      iterator=iterator)\n",
    "\n",
    "\n",
    "infer_graph = create_infer_graph()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For more illustrative output during training, we pick a short list of source/target sentence pairs; sample translations of these will be printed periodically during the training and evaluation steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('hug me', 'drck mich'), ('goodbye', 'auf wiedersehen'), ('get lost', 'mach dich fort'), ('come over', 'komm hierher')]\n"
     ]
    }
   ],
   "source": [
    "# Create sample data for evaluation\n",
    "sample_ix = [25, 125, 240, 450]\n",
    "sample_src_data = [' '.join(english_sentence[x]) for x in sample_ix]\n",
    "sample_tgt_data = [' '.join(german_sentence[x]) for x in sample_ix]\n",
    "print([x for x in zip(sample_src_data, sample_tgt_data)])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we create a session for each of the three graphs and load the training model, creating it with fresh parameters if no checkpoint exists:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  created train model with fresh parameters, time 0.36s\n"
     ]
    }
   ],
   "source": [
    "config_proto = utils.get_config_proto()\n",
    "\n",
    "train_sess = tf.Session(config=config_proto, graph=train_graph.graph)\n",
    "eval_sess = tf.Session(config=config_proto, graph=eval_graph.graph)\n",
    "infer_sess = tf.Session(config=config_proto, graph=infer_graph.graph)\n",
    "\n",
    "# Load the training graph\n",
    "with train_graph.graph.as_default():\n",
    "    loaded_train_model, global_step = model_helper.create_or_load_model(train_graph.model,\n",
    "                                                                        hparams.out_dir,\n",
    "                                                                        train_sess,\n",
    "                                                                        \"train\")\n",
    "\n",
    "\n",
    "summary_writer = tf.summary.FileWriter(os.path.join(hparams.out_dir, 'Training'), train_graph.graph)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before training, we register a \"best\" score and output directory for each metric (used to checkpoint the best models), then run a full initial evaluation (sample decode plus dev/test perplexity) with the untrained model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  created infer model with fresh parameters, time 0.27s\n",
      "  # 0\n",
      "    src: hug me\n",
      "    ref: drck mich\n",
      "    nmt: kaninchen kaninchen kaninchen notwendig\n",
      "  created eval model with fresh parameters, time 0.30s\n",
      "  eval dev: perplexity 10735.97, time 2s, Mon Jul 30 20:20:23 2018.\n",
      "  eval test: perplexity 10735.97, time 2s, Mon Jul 30 20:20:25 2018.\n",
      "  created infer model with fresh parameters, time 0.30s\n"
     ]
    }
   ],
   "source": [
    "for metric in hparams.metrics:\n",
    "    hparams.add_hparam(\"best_\" + metric, 0)\n",
    "    best_metric_dir = os.path.join(hparams.out_dir, \"best_\" + metric)\n",
    "    hparams.add_hparam(\"best_\" + metric + \"_dir\", best_metric_dir)\n",
    "    tf.gfile.MakeDirs(best_metric_dir)\n",
    "\n",
    "\n",
    "eval_output = train.run_full_eval(hparams.out_dir, infer_graph, infer_sess, eval_graph, eval_sess,\n",
    "                                  hparams, summary_writer, sample_src_data, sample_tgt_data)\n",
    "\n",
    "eval_results, _, acc_bleu_scores = eval_output"
   ]
  },
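  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the initial perplexity (~10736) is close to the target vocabulary size of 10,000 (visible in the `output_projection` kernel shape printed earlier). That is what an untrained, near-uniform model should produce: a uniform distribution over `V` words has per-word cross-entropy `log(V)`, and hence perplexity exactly `V`. A quick sanity check:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "vocab_size = 10000\n",
    "\n",
    "# Uniform model: every word gets probability 1/V, so the average\n",
    "# negative log-likelihood per word is log(V) and perplexity is V.\n",
    "uniform_ppl = math.exp(math.log(vocab_size))\n",
    "print(round(uniform_ppl))  # 10000\n",
    "```"
   ]
  },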
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can initialize the training:\n",
    "\n",
    " - record the global step at which statistics and evaluations were last reported.\n",
    " - initialize the timing and loss statistics.\n",
    " - initialize the training iterator, skipping any examples already consumed in the current epoch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "# Start step 0, lr 1, Mon Jul 30 20:21:56 2018\n",
      "# Init train iterator, skipping 0 elements\n"
     ]
    }
   ],
   "source": [
    "# Training Initialization\n",
    "last_stats_step = global_step\n",
    "last_eval_step = global_step\n",
    "last_external_eval_step = global_step\n",
    "\n",
    "steps_per_eval = 10 * hparams.steps_per_stats\n",
    "steps_per_external_eval = 5 * steps_per_eval\n",
    "\n",
    "avg_step_time = 0.0\n",
    "step_time, checkpoint_loss, checkpoint_predict_count = 0.0, 0.0, 0.0\n",
    "checkpoint_total_count = 0.0\n",
    "speed, train_ppl = 0.0, 0.0\n",
    "\n",
    "utils.print_out(\"# Start step %d, lr %g, %s\" %\n",
    "                (global_step, loaded_train_model.learning_rate.eval(session=train_sess),\n",
    "                 time.ctime()))\n",
    "skip_count = hparams.batch_size * hparams.epoch_step\n",
    "utils.print_out(\"# Init train iterator, skipping %d elements\" % skip_count)\n",
    "\n",
    "train_sess.run(train_graph.iterator.initializer,\n",
    "               feed_dict={train_graph.skip_count_placeholder: skip_count})"
   ]
  },
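  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `skip_count` computed above is what lets training resume mid-epoch after a restart: the re-initialized iterator skips the `batch_size * epoch_step` examples that were already consumed. A minimal sketch of this bookkeeping on a toy dataset (plain Python, no TensorFlow; the `batch_size` and `epoch_step` values are made up for illustration):\n",
    "\n",
    "```python\n",
    "batch_size = 4\n",
    "epoch_step = 3          # batches already consumed in the interrupted epoch\n",
    "\n",
    "# Resume point: skip the examples the previous run already trained on.\n",
    "skip_count = batch_size * epoch_step\n",
    "\n",
    "data = list(range(20))          # toy stand-in for the training examples\n",
    "remaining = data[skip_count:]   # what the re-initialized iterator yields\n",
    "\n",
    "print(skip_count)       # 12\n",
    "print(remaining[:4])    # [12, 13, 14, 15] -- the first batch after resuming\n",
    "```"
   ]
  },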
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we start the training!  This may take a while to run (roughly 12 hours on an Intel Core i7 CPU with 16GB RAM), though it should run considerably faster on a GPU setup."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run training\n",
    "while global_step < hparams.num_train_steps:\n",
    "    start_time = time.time()\n",
    "    try:\n",
    "        step_result = loaded_train_model.train(train_sess)\n",
    "        (_, step_loss, step_predict_count, step_summary, global_step, step_word_count,\n",
    "         batch_size, __, ___) = step_result\n",
    "        hparams.epoch_step += 1\n",
    "    except tf.errors.OutOfRangeError:\n",
    "        # Next Epoch\n",
    "        hparams.epoch_step = 0\n",
    "        utils.print_out(\"# Finished an epoch, step %d. Perform external evaluation\" % global_step)\n",
    "        train.run_sample_decode(infer_graph,\n",
    "                                infer_sess,\n",
    "                                hparams.out_dir,\n",
    "                                hparams,\n",
    "                                summary_writer,\n",
    "                                sample_src_data,\n",
    "                                sample_tgt_data)\n",
    "        dev_scores, test_scores, _ = train.run_external_eval(infer_graph,\n",
    "                                                             infer_sess,\n",
    "                                                             hparams.out_dir,\n",
    "                                                             hparams,\n",
    "                                                             summary_writer)\n",
    "        train_sess.run(train_graph.iterator.initializer, feed_dict={train_graph.skip_count_placeholder: 0})\n",
    "        continue\n",
    "\n",
    "    summary_writer.add_summary(step_summary, global_step)\n",
    "\n",
    "    # Statistics\n",
    "    step_time += (time.time() - start_time)\n",
    "    checkpoint_loss += (step_loss * batch_size)\n",
    "    checkpoint_predict_count += step_predict_count\n",
    "    checkpoint_total_count += float(step_word_count)\n",
    "\n",
    "    # print statistics\n",
    "    if global_step - last_stats_step >= hparams.steps_per_stats:\n",
    "        last_stats_step = global_step\n",
    "        avg_step_time = step_time / hparams.steps_per_stats\n",
    "        train_ppl = utils.safe_exp(checkpoint_loss / checkpoint_predict_count)\n",
    "        speed = checkpoint_total_count / (1000 * step_time)\n",
    "\n",
    "        utils.print_out(\"  global step %d lr %g \"\n",
    "                        \"step-time %.2fs wps %.2fK ppl %.2f %s\" %\n",
    "                        (global_step,\n",
    "                         loaded_train_model.learning_rate.eval(session=train_sess),\n",
    "                         avg_step_time, speed, train_ppl, train._get_best_results(hparams)))\n",
    "\n",
    "        if math.isnan(train_ppl):\n",
    "            break\n",
    "\n",
    "        # Reset timer and loss.\n",
    "        step_time, checkpoint_loss, checkpoint_predict_count = 0.0, 0.0, 0.0\n",
    "        checkpoint_total_count = 0.0\n",
    "\n",
    "    if global_step - last_eval_step >= steps_per_eval:\n",
    "        last_eval_step = global_step\n",
    "        utils.print_out(\"# Save eval, global step %d\" % global_step)\n",
    "        utils.add_summary(summary_writer, global_step, \"train_ppl\", train_ppl)\n",
    "\n",
    "        # Save checkpoint\n",
    "        loaded_train_model.saver.save(train_sess, os.path.join(hparams.out_dir, \"translate.ckpt\"),\n",
    "                                      global_step=global_step)\n",
    "\n",
    "        # Evaluate on dev/test\n",
    "        train.run_sample_decode(infer_graph,\n",
    "                                infer_sess,\n",
    "                                hparams.out_dir,\n",
    "                                hparams,\n",
    "                                summary_writer,\n",
    "                                sample_src_data,\n",
    "                                sample_tgt_data)\n",
    "        dev_ppl, test_ppl = train.run_internal_eval(eval_graph,\n",
    "                                                    eval_sess,\n",
    "                                                    hparams.out_dir,\n",
    "                                                    hparams,\n",
    "                                                    summary_writer)\n",
    "\n",
    "    if global_step - last_external_eval_step >= steps_per_external_eval:\n",
    "        last_external_eval_step = global_step\n",
    "\n",
    "        # Save checkpoint\n",
    "        loaded_train_model.saver.save(train_sess, os.path.join(hparams.out_dir, \"translate.ckpt\"),\n",
    "                                      global_step=global_step)\n",
    "\n",
    "        train.run_sample_decode(infer_graph,\n",
    "                                infer_sess,\n",
    "                                hparams.out_dir,\n",
    "                                hparams,\n",
    "                                summary_writer,\n",
    "                                sample_src_data,\n",
    "                                sample_tgt_data)\n",
    "        dev_scores, test_scores, _ = train.run_external_eval(infer_graph,\n",
    "                                                             infer_sess,\n",
    "                                                             hparams.out_dir,\n",
    "                                                             hparams,\n",
    "                                                             summary_writer)"
   ]
  },
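  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-checkpoint statistics in the loop above reduce to a single perplexity number via `utils.safe_exp(checkpoint_loss / checkpoint_predict_count)`. A small self-contained sketch of that calculation (the overflow guard mirrors what such a `safe_exp` helper typically does; the accumulated loss and count values here are made up):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "\n",
    "def safe_exp(value):\n",
    "    # exp(value), returning inf instead of raising on overflow\n",
    "    try:\n",
    "        return math.exp(value)\n",
    "    except OverflowError:\n",
    "        return float(\"inf\")\n",
    "\n",
    "\n",
    "# Accumulated over the steps since the last stats report:\n",
    "checkpoint_loss = 1250.0          # sum of (step_loss * batch_size)\n",
    "checkpoint_predict_count = 500.0  # number of target words predicted\n",
    "\n",
    "train_ppl = safe_exp(checkpoint_loss / checkpoint_predict_count)\n",
    "print(round(train_ppl, 2))  # 12.18\n",
    "\n",
    "# A diverging loss no longer crashes the stats report:\n",
    "print(safe_exp(10000.0))    # inf\n",
    "```"
   ]
  },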
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You should see output similar to the following:\n",
    "```\n",
    "  global step 102 lr 1 step-time 6.48s wps 0.24K ppl 1661.30 bleu 0.00\n",
    "  global step 202 lr 1 step-time 6.48s wps 0.25K ppl 282.66 bleu 0.00\n",
    "  global step 302 lr 1 step-time 6.71s wps 0.26K ppl 205.97 bleu 0.00\n",
    "  global step 402 lr 1 step-time 7.47s wps 0.24K ppl 170.30 bleu 0.00\n",
    "  global step 502 lr 1 step-time 7.51s wps 0.24K ppl 135.71 bleu 0.00\n",
    "  global step 602 lr 1 step-time 7.59s wps 0.24K ppl 116.17 bleu 0.00\n",
    "  global step 702 lr 1 step-time 7.55s wps 0.24K ppl 97.85 bleu 0.00\n",
    "  global step 802 lr 1 step-time 7.76s wps 0.24K ppl 86.67 bleu 0.00\n",
    "  global step 902 lr 1 step-time 7.94s wps 0.23K ppl 72.19 bleu 0.00\n",
    "  global step 1002 lr 1 step-time 7.75s wps 0.24K ppl 66.03 bleu 0.00\n",
    "# Save eval, global step 1002\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-1002\n",
    "2018-07-30 00:22:18.155019: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "2018-07-30 00:22:18.155026: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 00:22:18.155053: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-1002, time 0.31s\n",
    "  # 3\n",
    "    src: come over\n",
    "    ref: komm hierher\n",
    "    nmt: komm auf\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-1002\n",
    "2018-07-30 00:22:18.637235: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 00:22:18.637272: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "  loaded eval model parameters from temp/seq2seq/nmt_out/translate.ckpt-1002, time 0.31s\n",
    "  eval dev: perplexity 68.29, time 2s, Mon Jul 30 00:22:21 2018.\n",
    "  eval test: perplexity 68.29, time 2s, Mon Jul 30 00:22:23 2018.\n",
    "  global step 1102 lr 1 step-time 7.60s wps 0.24K ppl 57.65 bleu 0.00\n",
    "  global step 1202 lr 1 step-time 7.77s wps 0.24K ppl 54.63 bleu 0.00\n",
    "  global step 1302 lr 1 step-time 7.86s wps 0.23K ppl 46.55 bleu 0.00\n",
    "# Finished an epoch, step 1332. Perform external evaluation\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-1002\n",
    "2018-07-30 01:04:55.469826: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 01:04:55.469826: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 01:04:55.469827: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-1002, time 0.32s\n",
    "  # 2\n",
    "    src: get lost\n",
    "    ref: mach dich fort\n",
    "    nmt: <unk> sie\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-1002\n",
    "2018-07-30 01:04:55.921978: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-1002, time 0.32s\n",
    "# External evaluation, global step 1002\n",
    "2018-07-30 01:04:55.921978: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 01:04:55.921980: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "  decoding to output temp/seq2seq/nmt_out/output_dev.\n",
    "  done, num sentences 101, num translations per input 1, time 12s, Mon Jul 30 01:05:08 2018.\n",
    "  bleu dev: 0.0\n",
    "  saving hparams to temp/seq2seq/nmt_out/hparams\n",
    "# External evaluation, global step 1002\n",
    "  decoding to output temp/seq2seq/nmt_out/output_test.\n",
    "  done, num sentences 101, num translations per input 1, time 14s, Mon Jul 30 01:05:22 2018.\n",
    "  bleu test: 0.0\n",
    "  saving hparams to temp/seq2seq/nmt_out/hparams\n",
    "  global step 1402 lr 1 step-time 6.79s wps 0.23K ppl 31.64 bleu 0.00\n",
    "  global step 1502 lr 1 step-time 6.48s wps 0.25K ppl 26.35 bleu 0.00\n",
    "  global step 1602 lr 1 step-time 7.00s wps 0.24K ppl 26.87 bleu 0.00\n",
    "  global step 1702 lr 1 step-time 7.47s wps 0.24K ppl 30.74 bleu 0.00\n",
    "  global step 1802 lr 1 step-time 7.86s wps 0.23K ppl 31.18 bleu 0.00\n",
    "  global step 1902 lr 1 step-time 7.84s wps 0.24K ppl 30.12 bleu 0.00\n",
    "  global step 2002 lr 1 step-time 7.76s wps 0.23K ppl 26.81 bleu 0.00\n",
    "# Save eval, global step 2002\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-2002\n",
    "2018-07-30 02:26:54.592550: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "2018-07-30 02:26:54.592550: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 02:26:54.592550: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-2002, time 0.27s\n",
    "  # 2\n",
    "    src: get lost\n",
    "    ref: mach dich fort\n",
    "    nmt: <unk> dich\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-2002\n",
    "2018-07-30 02:26:55.015783: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 02:26:55.015783: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "  loaded eval model parameters from temp/seq2seq/nmt_out/translate.ckpt-2002, time 0.26s\n",
    "  eval dev: perplexity 164.59, time 3s, Mon Jul 30 02:26:58 2018.\n",
    "  eval test: perplexity 164.59, time 3s, Mon Jul 30 02:27:01 2018.\n",
    "  global step 2102 lr 1 step-time 7.63s wps 0.24K ppl 25.65 bleu 0.00\n",
    "  global step 2202 lr 1 step-time 7.86s wps 0.23K ppl 25.78 bleu 0.00\n",
    "  global step 2302 lr 1 step-time 7.84s wps 0.23K ppl 23.49 bleu 0.00\n",
    "  global step 2402 lr 1 step-time 7.82s wps 0.24K ppl 23.29 bleu 0.00\n",
    "  global step 2502 lr 1 step-time 7.63s wps 0.24K ppl 20.36 bleu 0.00\n",
    "  global step 2602 lr 1 step-time 7.89s wps 0.23K ppl 20.28 bleu 0.00\n",
    "# Finished an epoch, step 2662. Perform external evaluation\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-2002\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-2002, time 0.30s\n",
    "  # 2\n",
    "2018-07-30 03:52:25.999086: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 03:52:25.999092: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 03:52:25.999119: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "    src: get lost\n",
    "    ref: mach dich fort\n",
    "    nmt: <unk> dich\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-2002\n",
    "2018-07-30 03:52:26.473327: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 03:52:26.473327: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 03:52:26.473331: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-2002, time 0.30s\n",
    "# External evaluation, global step 2002\n",
    "  decoding to output temp/seq2seq/nmt_out/output_dev.\n",
    "  done, num sentences 101, num translations per input 1, time 16s, Mon Jul 30 03:52:42 2018.\n",
    "  bleu dev: 0.0\n",
    "  saving hparams to temp/seq2seq/nmt_out/hparams\n",
    "# External evaluation, global step 2002\n",
    "  decoding to output temp/seq2seq/nmt_out/output_test.\n",
    "  done, num sentences 101, num translations per input 1, time 16s, Mon Jul 30 03:52:59 2018.\n",
    "  bleu test: 0.0\n",
    "  saving hparams to temp/seq2seq/nmt_out/hparams\n",
    "  global step 2702 lr 1 step-time 7.29s wps 0.23K ppl 14.87 bleu 0.00\n",
    "  global step 2802 lr 0.5 step-time 6.50s wps 0.24K ppl 9.26 bleu 0.00\n",
    "  global step 2902 lr 0.5 step-time 6.65s wps 0.25K ppl 9.15 bleu 0.00\n",
    "  global step 3002 lr 0.25 step-time 7.27s wps 0.24K ppl 10.38 bleu 0.00\n",
    "# Save eval, global step 3002\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-3002\n",
    "2018-07-30 04:31:33.978539: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "2018-07-30 04:31:33.978539: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 04:31:33.978539: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-3002, time 0.27s\n",
    "  # 3\n",
    "    src: come over\n",
    "    ref: komm hierher\n",
    "    nmt: kommt\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-3002\n",
    "2018-07-30 04:31:34.377060: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "2018-07-30 04:31:34.377101: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "  loaded eval model parameters from temp/seq2seq/nmt_out/translate.ckpt-3002, time 0.27s\n",
    "  eval dev: perplexity 296.65, time 3s, Mon Jul 30 04:31:37 2018.\n",
    "  eval test: perplexity 296.65, time 2s, Mon Jul 30 04:31:40 2018.\n",
    "  global step 3102 lr 0.25 step-time 7.68s wps 0.24K ppl 10.89 bleu 0.00\n",
    "  global step 3202 lr 0.25 step-time 7.81s wps 0.24K ppl 11.07 bleu 0.00\n",
    "  global step 3302 lr 0.125 step-time 7.64s wps 0.24K ppl 9.78 bleu 0.00\n",
    "  global step 3402 lr 0.125 step-time 7.85s wps 0.24K ppl 10.30 bleu 0.00\n",
    "  global step 3502 lr 0.0625 step-time 7.76s wps 0.23K ppl 9.66 bleu 0.00\n",
    "  global step 3602 lr 0.0625 step-time 7.64s wps 0.24K ppl 9.43 bleu 0.00\n",
    "  global step 3702 lr 0.0625 step-time 7.83s wps 0.24K ppl 10.13 bleu 0.00\n",
    "  global step 3802 lr 0.03125 step-time 7.63s wps 0.24K ppl 9.35 bleu 0.00\n",
    "  global step 3902 lr 0.03125 step-time 7.65s wps 0.24K ppl 9.89 bleu 0.00\n",
    "# Finished an epoch, step 3992. Perform external evaluation\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-3002\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-3002, time 0.33s\n",
    "  # 1\n",
    "2018-07-30 06:39:49.759226: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "2018-07-30 06:39:49.759226: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 06:39:49.759226: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "    src: goodbye\n",
    "    ref: auf wiedersehen\n",
    "    nmt: <unk>\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-3002\n",
    "2018-07-30 06:39:50.172528: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "2018-07-30 06:39:50.172528: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-3002, time 0.32s\n",
    "2018-07-30 06:39:50.172528: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "# External evaluation, global step 3002\n",
    "  decoding to output temp/seq2seq/nmt_out/output_dev.\n",
    "  done, num sentences 101, num translations per input 1, time 14s, Mon Jul 30 06:40:05 2018.\n",
    "  bleu dev: 0.0\n",
    "  saving hparams to temp/seq2seq/nmt_out/hparams\n",
    "# External evaluation, global step 3002\n",
    "  decoding to output temp/seq2seq/nmt_out/output_test.\n",
    "  done, num sentences 101, num translations per input 1, time 13s, Mon Jul 30 06:40:19 2018.\n",
    "  bleu test: 0.0\n",
    "  saving hparams to temp/seq2seq/nmt_out/hparams\n",
    "  global step 4002 lr 0.015625 step-time 8.15s wps 0.22K ppl 9.38 bleu 0.00\n",
    "# Save eval, global step 4002\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-4002\n",
    "  loaded infer model parameters from temp/seq2seq/nmt_out/translate.ckpt-4002, time 0.29s\n",
    "2018-07-30 06:41:33.303050: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "  # 1\n",
    "2018-07-30 06:41:33.303078: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "2018-07-30 06:41:33.303080: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "    src: goodbye\n",
    "    ref: auf wiedersehen\n",
    "    nmt: <unk>\n",
    "INFO:tensorflow:Restoring parameters from temp/seq2seq/nmt_out/translate.ckpt-4002\n",
    "2018-07-30 06:41:33.653274: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.en is already initialized.\n",
    "  loaded eval model parameters from temp/seq2seq/nmt_out/translate.ckpt-4002, time 0.26s\n",
    "2018-07-30 06:41:33.653296: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file vocab.de is already initialized.\n",
    "  eval dev: perplexity 342.19, time 3s, Mon Jul 30 06:41:36 2018.\n",
    "  eval test: perplexity 342.19, time 3s, Mon Jul 30 06:41:40 2018.\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
