{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "starter_notebook.ipynb", "provenance": [], "collapsed_sections": [], "toc_visible": true, "include_colab_link": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.6" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "view-in-github", "colab_type": "text" }, "source": [ "\"Open" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Igc5itf-xMGj" }, "source": [ "# Masakhane - Machine Translation for African Languages (Using JoeyNMT)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "x4fXCKCf36IK" }, "source": [ "## Note before beginning:\n", "### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus. \n", "\n", "### - The tl;dr: Go to the **\"TODO\"** comments which will tell you what to update to get up and running\n", "\n", "### - If you actually want to have a clue what you're doing, read the text and peek at the links\n", "\n", "### - With 100 epochs, it should take around 7 hours to run in Google Colab\n", "\n", "### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com\n", "\n", "### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "l929HimrxS0a" }, "source": [ "## Retrieve your data & make a parallel corpus\n", "\n", "If you are wanting to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and to convert them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.\n", "\n", "Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe. 
" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "oGRmDELn7Az0", "colab": { "base_uri": "https://localhost:8080/", "height": 122 }, "outputId": "a1a041fb-fee7-4cb1-ba15-a200ccc8748f" }, "source": [ "from google.colab import drive\n", "drive.mount('/content/drive')" ], "execution_count": 6, "outputs": [ { "output_type": "stream", "text": [ "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n", "\n", "Enter your authorization code:\n", "··········\n", "Mounted at /content/drive\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "Cn3tgQLzUxwn", "colab": {} }, "source": [ "# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:\n", "# These will also become the suffix's of all vocab and corpus files used throughout\n", "import os\n", "source_language = \"en\"\n", "target_language = \"urh\" \n", "lc = False # If True, lowercase the data.\n", "seed = 42 # Random seed for shuffling.\n", "tag = \"baseline\" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted\n", "\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "os.environ[\"tag\"] = tag\n", "\n", "# This will save it to a folder in our gdrive instead!\n", "!mkdir -p \"/content/drive/My Drive/masakhane/$src-$tgt-$tag\"\n", "os.environ[\"gdrive_path\"] = \"/content/drive/My Drive/masakhane/%s-%s-%s\" % (source_language, target_language, tag)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "kBSgJHEw7Nvx", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "6e2f809e-2323-4048-bf2e-2b0ca333de22" }, "source": [ "!echo $gdrive_path" ], "execution_count": 14, "outputs": [ { "output_type": "stream", "text": [ "/content/drive/My Drive/masakhane/en-urh-baseline\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "gA75Fs9ys8Y9", "colab": { "base_uri": "https://localhost:8080/", "height": 122 }, "outputId": "9e3a0967-5207-485d-bd66-b31881251543" }, "source": [ "# Install opus-tools\n", "! 
pip install opustools-pkg" ], "execution_count": 15, "outputs": [ { "output_type": "stream", "text": [ "Collecting opustools-pkg\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/6c/9f/e829a0cceccc603450cd18e1ff80807b6237a88d9a8df2c0bb320796e900/opustools_pkg-0.0.52-py3-none-any.whl (80kB)\n", "\r\u001b[K |████ | 10kB 26.2MB/s eta 0:00:01\r\u001b[K |████████ | 20kB 5.8MB/s eta 0:00:01\r\u001b[K |████████████▏ | 30kB 8.4MB/s eta 0:00:01\r\u001b[K |████████████████▏ | 40kB 5.5MB/s eta 0:00:01\r\u001b[K |████████████████████▎ | 51kB 6.7MB/s eta 0:00:01\r\u001b[K |████████████████████████▎ | 61kB 7.9MB/s eta 0:00:01\r\u001b[K |████████████████████████████▎ | 71kB 9.1MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 81kB 5.1MB/s \n", "\u001b[?25hInstalling collected packages: opustools-pkg\n", "Successfully installed opustools-pkg-0.0.52\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "xq-tDZVks7ZD", "colab": { "base_uri": "https://localhost:8080/", "height": 204 }, "outputId": "530a6d7d-e08f-46d7-d731-9e6916b94a6e" }, "source": [ "# Downloading our corpus\n", "! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q\n", "\n", "# extract the corpus file\n", "! gunzip JW300_latest_xml_$src-$tgt.xml.gz" ], "execution_count": 16, "outputs": [ { "output_type": "stream", "text": [ "\n", "Alignment file /proj/nlpl/data/OPUS/JW300/latest/xml/en-urh.xml.gz not found. The following files are available for downloading:\n", "\n", " 304 KB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en-urh.xml.gz\n", " 263 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en.zip\n", " 3 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/urh.zip\n", "\n", " 267 MB Total size\n", "./JW300_latest_xml_en-urh.xml.gz ... 100% of 304 KB\n", "./JW300_latest_xml_en.zip ... 100% of 263 MB\n", "./JW300_latest_xml_urh.zip ... 100% of 3 MB\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "n48GDRnP8y2G", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 578 }, "outputId": "843b5e2c-d4bf-48be-9c5b-f72b91aa6956" }, "source": [ "# Download the global test set.\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", " \n", "# And the specific test set for this language pair.\n", "os.environ[\"trg\"] = target_language \n", "os.environ[\"src\"] = source_language \n", "\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en \n", "! mv test.en-$trg.en test.en\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg \n", "! mv test.en-$trg.$trg test.$trg" ], "execution_count": 17, "outputs": [ { "output_type": "stream", "text": [ "--2019-12-30 07:21:25-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 
200 OK\n", "Length: 277791 (271K) [text/plain]\n", "Saving to: ‘test.en-any.en’\n", "\n", "\rtest.en-any.en 0%[ ] 0 --.-KB/s \rtest.en-any.en 100%[===================>] 271.28K --.-KB/s in 0.02s \n", "\n", "2019-12-30 07:21:25 (12.4 MB/s) - ‘test.en-any.en’ saved [277791/277791]\n", "\n", "--2019-12-30 07:21:26-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-urh.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 201504 (197K) [text/plain]\n", "Saving to: ‘test.en-urh.en’\n", "\n", "test.en-urh.en 100%[===================>] 196.78K --.-KB/s in 0.02s \n", "\n", "2019-12-30 07:21:27 (12.6 MB/s) - ‘test.en-urh.en’ saved [201504/201504]\n", "\n", "--2019-12-30 07:21:30-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-urh.urh\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 236859 (231K) [text/plain]\n", "Saving to: ‘test.en-urh.urh’\n", "\n", "test.en-urh.urh 100%[===================>] 231.31K --.-KB/s in 0.02s \n", "\n", "2019-12-30 07:21:31 (14.6 MB/s) - ‘test.en-urh.urh’ saved [236859/236859]\n", "\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "NqDG-CI28y2L", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "34f01195-7463-4f08-ba17-e82b1ebe43b8" }, "source": [ "# Read the test data to filter from train and dev splits.\n", "# Store english portion in set for quick filtering checks.\n", "en_test_sents = set()\n", "filter_test_sents = \"test.en-any.en\"\n", "j = 0\n", "with open(filter_test_sents) as f:\n", " for line in f:\n", " en_test_sents.add(line.strip())\n", " j += 1\n", "print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))" ], "execution_count": 18, "outputs": [ { "output_type": "stream", "text": [ "Loaded 3571 global test sentences to filter from the training/dev data.\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "3CNdwLBCfSIl", "colab": { "base_uri": "https://localhost:8080/", "height": 159 }, "outputId": "a7756b3a-5e51-447c-e551-4daa855af385" }, "source": [ "import pandas as pd\n", "\n", "# TMX file to dataframe\n", "source_file = 'jw300.' + source_language\n", "target_file = 'jw300.' 
+ target_language\n", "\n", "source = []\n", "target = []\n", "skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.\n", "with open(source_file) as f:\n", "    for i, line in enumerate(f):\n", "        # Skip sentences that are contained in the test set.\n", "        if line.strip() not in en_test_sents:\n", "            source.append(line.strip())\n", "        else:\n", "            skip_lines.append(i)\n", "with open(target_file) as f:\n", "    for j, line in enumerate(f):\n", "        # Only add to corpus if corresponding source was not skipped.\n", "        if j not in skip_lines:\n", "            target.append(line.strip())\n", "\n", "print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i))\n", "\n", "df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])\n", "# If you get a TypeError saying the data argument can't be an iterator, it is likely due to your pandas version; run the line below instead.\n", "#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])\n", "df.head(3)" ], "execution_count": 19, "outputs": [ { "output_type": "stream", "text": [ "Loaded data and skipped 4050/32709 lines since contained in test set.\n" ], "name": "stdout" }, { "output_type": "execute_result", "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
source_sentencetarget_sentence
0Why It Pays to Be Honest 6Erere Herọ Ra Vwọ Dia Ohwo rẹ Uyota 5
1The Bible Changes Lives7 Ovwan “ Jẹn Ẹguọnọ rẹ Iniọvo na Dje Ebuoebuo...
2Give Me Just One Year of Peace and Happiness 8...12 Jẹ ‘ Ẹse rẹ Ọghẹnẹ rẹ Unu se Gbe - e na , ’...
\n", "
" ], "text/plain": [ " source_sentence target_sentence\n", "0 Why It Pays to Be Honest 6 Erere Herọ Ra Vwọ Dia Ohwo rẹ Uyota 5\n", "1 The Bible Changes Lives 7 Ovwan “ Jẹn Ẹguọnọ rẹ Iniọvo na Dje Ebuoebuo...\n", "2 Give Me Just One Year of Peace and Happiness 8... 12 Jẹ ‘ Ẹse rẹ Ọghẹnẹ rẹ Unu se Gbe - e na , ’..." ] }, "metadata": { "tags": [] }, "execution_count": 19 } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "YkuK3B4p2AkN" }, "source": [ "## Pre-processing and export\n", "\n", "It is generally a good idea to remove duplicate translations and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned.\n", "\n", "In addition we will split our data into dev/test/train and export to the filesystem." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "M_2ouEOH1_1q", "colab": { "base_uri": "https://localhost:8080/", "height": 187 }, "outputId": "53d4eb6d-4446-4416-e9cc-57d91fae8f99" }, "source": [ "# drop duplicate translations\n", "df_pp = df.drop_duplicates()\n", "\n", "# drop conflicting translations\n", "# (this is optional and something that you might want to comment out \n", "# depending on the size of your corpus)\n", "df_pp.drop_duplicates(subset='source_sentence', inplace=True)\n", "df_pp.drop_duplicates(subset='target_sentence', inplace=True)\n", "\n", "# Shuffle the data to remove bias in dev set selection.\n", "df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)" ], "execution_count": 20, "outputs": [ { "output_type": "stream", "text": [ "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:6: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame\n", "\n", "See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " \n", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame\n", "\n", "See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " import sys\n" ], "name": "stderr" } ] }, { "cell_type": "code", "metadata": { "id": "Z_1BwAApEtMk", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 785 }, "outputId": "4c667e29-fd05-4952-c6fa-52bb75f50d8f" }, "source": [ "# Install fuzzy wuzzy to remove \"almost duplicate\" sentences in the\n", "# test and training sets.\n", "! pip install fuzzywuzzy\n", "! pip install python-Levenshtein\n", "import time\n", "from fuzzywuzzy import process\n", "import numpy as np\n", "\n", "# reset the index of the training set after previous filtering\n", "df_pp.reset_index(drop=False, inplace=True)\n", "\n", "# Remove samples from the training data set if they \"almost overlap\" with the\n", "# samples in the test set.\n", "\n", "# Filtering function. Adjust pad to narrow down the candidate matches to\n", "# within a certain length of characters of the given sample.\n", "def fuzzfilter(sample, candidates, pad):\n", " candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad] \n", " if len(candidates) > 0:\n", " return process.extractOne(sample, candidates)[1]\n", " else:\n", " return np.nan\n", "\n", "# NOTE - This might run slow depending on the size of your training set. 
We are\n", "# printing some information to help you track how long it would take. \n", "scores = []\n", "start_time = time.time()\n", "for idx, row in df_pp.iterrows():\n", " scores.append(fuzzfilter(row['source_sentence'], list(en_test_sents), 5))\n", " if idx % 1000 == 0:\n", " hours, rem = divmod(time.time() - start_time, 3600)\n", " minutes, seconds = divmod(rem, 60)\n", " print(\"{:0>2}:{:0>2}:{:05.2f}\".format(int(hours),int(minutes),seconds), \"%0.2f percent complete\" % (100.0*float(idx)/float(len(df_pp))))\n", "\n", "# Filter out \"almost overlapping samples\"\n", "df_pp['scores'] = scores\n", "df_pp = df_pp[df_pp['scores'] < 95]" ], "execution_count": 21, "outputs": [ { "output_type": "stream", "text": [ "Collecting fuzzywuzzy\n", " Downloading https://files.pythonhosted.org/packages/d8/f1/5a267addb30ab7eaa1beab2b9323073815da4551076554ecc890a3595ec9/fuzzywuzzy-0.17.0-py2.py3-none-any.whl\n", "Installing collected packages: fuzzywuzzy\n", "Successfully installed fuzzywuzzy-0.17.0\n", "Collecting python-Levenshtein\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/42/a9/d1785c85ebf9b7dfacd08938dd028209c34a0ea3b1bcdb895208bd40a67d/python-Levenshtein-0.12.0.tar.gz (48kB)\n", "\u001b[K |████████████████████████████████| 51kB 3.7MB/s \n", "\u001b[?25hRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from python-Levenshtein) (42.0.2)\n", "Building wheels for collected packages: python-Levenshtein\n", " Building wheel for python-Levenshtein (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for python-Levenshtein: filename=python_Levenshtein-0.12.0-cp36-cp36m-linux_x86_64.whl size=144681 sha256=4537127d21a3fdba6d4bd3bf7e6d305ab4f32d450b996f04a76f46e47d7cf10c\n", " Stored in directory: /root/.cache/pip/wheels/de/c2/93/660fd5f7559049268ad2dc6d81c4e39e9e36518766eaf7e342\n", "Successfully built python-Levenshtein\n", "Installing collected packages: python-Levenshtein\n", "Successfully installed python-Levenshtein-0.12.0\n", "00:00:00.04 0.00 percent complete\n", "00:00:21.31 3.74 percent complete\n", "00:00:42.14 7.49 percent complete\n", "00:01:04.35 11.23 percent complete\n", "00:01:26.20 14.98 percent complete\n", "00:01:47.40 18.72 percent complete\n", "00:02:08.42 22.46 percent complete\n", "00:02:29.34 26.21 percent complete\n", "00:02:49.96 29.95 percent complete\n", "00:03:11.20 33.70 percent complete\n", "00:03:32.13 37.44 percent complete\n", "00:03:54.79 41.18 percent complete\n", "00:04:15.89 44.93 percent complete\n", "00:04:36.32 48.67 percent complete\n", "00:04:57.28 52.41 percent complete\n", "00:05:18.66 56.16 percent complete\n", "00:05:40.21 59.90 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:06:01.44 63.65 percent complete\n", "00:06:22.21 67.39 percent complete\n", "00:06:45.07 71.13 percent complete\n", "00:07:05.83 74.88 percent complete\n", "00:07:26.54 78.62 percent complete\n", "00:07:47.67 82.37 percent complete\n", "00:08:08.28 86.11 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '*']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:08:29.51 89.85 percent complete\n", "00:08:50.49 93.60 percent complete\n", "00:09:11.91 97.34 percent complete\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "hxxBOCA-xXhy", "colab": { "base_uri": "https://localhost:8080/", "height": 819 }, "outputId": "d09cb622-543e-4cba-8bc4-f1a323dbdbcb" }, "source": [ "# This section does the split between train/dev for the parallel corpora then saves them as separate files\n", "# We use 1000 dev test and the given test set.\n", "import csv\n", "\n", "# Do the split between dev/train and create parallel corpora\n", "num_dev_patterns = 1000\n", "\n", "# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.\n", "if lc: # Julia: making lowercasing optional\n", " df_pp[\"source_sentence\"] = df_pp[\"source_sentence\"].str.lower()\n", " df_pp[\"target_sentence\"] = df_pp[\"target_sentence\"].str.lower()\n", "\n", "# Julia: test sets are already generated\n", "dev = df_pp.tail(num_dev_patterns) # Herman: Error in original\n", "stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)\n", "\n", "with open(\"train.\"+source_language, \"w\") as src_file, open(\"train.\"+target_language, \"w\") as trg_file:\n", " for index, row in stripped.iterrows():\n", " src_file.write(row[\"source_sentence\"]+\"\\n\")\n", " trg_file.write(row[\"target_sentence\"]+\"\\n\")\n", " \n", "with open(\"dev.\"+source_language, \"w\") as src_file, open(\"dev.\"+target_language, \"w\") as trg_file:\n", " for index, row in dev.iterrows():\n", " src_file.write(row[\"source_sentence\"]+\"\\n\")\n", " trg_file.write(row[\"target_sentence\"]+\"\\n\")\n", "\n", "#stripped[[\"source_sentence\"]].to_csv(\"train.\"+source_language, header=False, index=False) # Herman: Added `header=False` everywhere\n", "#stripped[[\"target_sentence\"]].to_csv(\"train.\"+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.\n", "\n", "#dev[[\"source_sentence\"]].to_csv(\"dev.\"+source_language, header=False, index=False)\n", "#dev[[\"target_sentence\"]].to_csv(\"dev.\"+target_language, header=False, index=False)\n", "\n", "# Doublecheck the format below. There should be no extra quotation marks or weird characters.\n", "! head train.*\n", "! head dev.*" ], "execution_count": 22, "outputs": [ { "output_type": "stream", "text": [ "==> train.en <==\n", "The number of publishers is now about ten times what it was when I began serving here .\n", "Similarly , elders should not only encourage and console their brothers with words but also build them up by showing sincere personal interest . ​ — 1 Cor .\n", "17 Why We “ Keep Bearing Much Fruit ”\n", "Now I have a journal that I keep on my desk to schedule upcoming work , and this helps me to schedule myself , not leaving things till the last minute . ”\n", "1 , 2 . ( a ) How do some react to the thought that God has an organization ?\n", "We cannot go to the point of disobeying God or violating our Christian neutrality . 
​ — Read 1 Peter 2 : 13 - 17 .\n", "Did this mean freedom for every literal slave ?\n", "Eventually , all my siblings did so and became Jehovah’s Witnesses .\n", "How pleased Jehovah will be as he observes our whole - souled efforts to “ keep bearing much fruit ” !\n", "Joseph , though , was a disciple , but he could not bring himself to say so openly .\n", "\n", "==> train.urh <==\n", "Ighwoghwota rehẹ ẹkuotọ na enẹna vwẹ ọhwọhwọ ihwe vwo bun vrẹ obo rọ hepha ọke me vwọ ga vwẹ oboyin .\n", "( 2 Kọr . 12 : 15 ) Vwẹ idjerhe vuọvo na , vwọ vrẹ ota rẹ unu rẹ ekpako cha vwọ bọn iniọvo na gan , o ji fo nẹ ayen ru obo ro che djephia nẹ ayen vwo ọdavwẹ rayen . ​ — 1 Kọr .\n", "17 Oboresorọ O Vwo Fo Nẹ A “ Mọ Ibi Buebu ”\n", "Asaọkiephana , mi vwo ẹbe rẹ mi si ọrhuẹrẹphiyotọ mẹ phiyọ , ọnana vwẹ ukẹcha kẹ vwẹ vwọ nabọ vwẹrote iruo mẹ , me rha yanjẹ ọvuọvo vwo hẹrhẹ imibrẹro ri chekọ bẹsiẹ ọke na vwo re - e . ”\n", "1 , 2 . ( a ) Die yen ihwo evo ta siẹrẹ ayen de nyo nẹ Ọghẹnẹ vwo ukoko ?\n", "Avwanre cha sa churhi rẹ Ọghẹnẹ fikirẹ aye - en yẹrẹ dia ẹbẹre ọvo rẹ akpọ na - a . — Se 1 Pita 2 : 13 - 17 .\n", "( Luk 4 : 18 ) Ọnana mudiaphiyọ egbomọphẹ vwọ kẹ ihwo re mu kpo eviẹn ?\n", "Ukuotọ rọyen , iniọvo mẹ eje de yono Baibol ji bromaphiyame kerẹ Iseri rẹ Jihova .\n", "O muẹro dẹn nẹ oma nabọ vwerhen Jihova kọke kọke rọ da mrẹ oborẹ avwanre davwan te , ra vwọ “ mọ ibi buebu ” !\n", "Ẹkẹvuọvo , Josẹf ọyen odibo rẹ Jesu ro se dje oma phia vwẹ azagba - a .\n", "==> dev.en <==\n", "These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "Today he is serving at Bethel .\n", "But freedom from what ?\n", "Avoid comparing your new congregation with your previous one .\n", "2 : 16 , 17 .\n", "As stated , the vindication of Jehovah’s sovereignty is a vital issue involving mankind .\n", "That is especially so if our treacherous heart tugs us in the opposite direction .\n", "At times , this resulted in more money going out than coming in for a period of time .\n", "How did hope reinforce Noah’s faith ?\n", "What prompts a mother to care tenderly for her newborn baby ?\n", "\n", "==> dev.urh <==\n", "E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2 : 16 , 17 .\n", "Kirobo ra tarọ jovwo , etito rẹ usuon rẹ Jihova , ọyen ota ọghanghanre ro fori nẹ ihworakpọ tẹnrovi .\n", "Ma rho , udu avwanre rọ vọnre vẹ ophiẹnvwe na da vuẹ avwanre nẹ e ru obo re chọre .\n", "Iruo kpokpọ nana nẹrhẹ a ghwọrọ vrẹ obo re torori ọkiọvo .\n", "Mavọ yen iphiẹrophiyọ vwọ nẹrhẹ esegbuyota rẹ Noa ganphiyọ ?\n", "Die yen mu oni vwọ vwẹrote ọmọ ro ghwe vwiẹ ?\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "epeCydmCyS8X" }, "source": [ "\n", "\n", "---\n", "\n", "\n", "## Installation of JoeyNMT\n", "\n", "JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io) " ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "iBRMm4kMxZ8L", "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "outputId": "0b7de6db-71f8-461f-fd32-b57ad40081ce" }, "source": [ "# Install JoeyNMT\n", "! git clone https://github.com/joeynmt/joeynmt.git\n", "! cd joeynmt; pip3 install ." 
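, "\n", "\n", "# Note: this cell installs whatever is currently on the JoeyNMT master branch,\n", "# so behaviour can differ between runs of the notebook. If you need reproducibility,\n", "# you could pin the clone to a specific commit before installing, e.g.\n", "# (hypothetical placeholder hash):\n", "# ! cd joeynmt; git checkout <commit-hash>; pip3 install ."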
], "execution_count": 23, "outputs": [ { "output_type": "stream", "text": [ "Cloning into 'joeynmt'...\n", "remote: Enumerating objects: 15, done.\u001b[K\n", "remote: Counting objects: 100% (15/15), done.\u001b[K\n", "remote: Compressing objects: 100% (12/12), done.\u001b[K\n", "remote: Total 2199 (delta 4), reused 5 (delta 3), pack-reused 2184\u001b[K\n", "Receiving objects: 100% (2199/2199), 2.60 MiB | 4.31 MiB/s, done.\n", "Resolving deltas: 100% (1525/1525), done.\n", "Processing /content/joeynmt\n", "Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.16.0)\n", "Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (4.3.0)\n", "Requirement already satisfied: numpy<2.0,>=1.14.5 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.17.4)\n", "Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (42.0.2)\n", "Requirement already satisfied: torch>=1.1 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.3.1)\n", "Requirement already satisfied: tensorflow>=1.14 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.15.0)\n", "Requirement already satisfied: torchtext in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.3.1)\n", "Collecting sacrebleu>=1.3.6\n", " Downloading https://files.pythonhosted.org/packages/45/31/1a135b964c169984b27fb2f7a50280fa7f8e6d9d404d8a9e596180487fd1/sacrebleu-1.4.3-py3-none-any.whl\n", "Collecting subword-nmt\n", " Downloading https://files.pythonhosted.org/packages/74/60/6600a7bc09e7ab38bc53a48a20d8cae49b837f93f5842a41fe513a694912/subword_nmt-0.3.7-py2.py3-none-any.whl\n", "Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (3.1.2)\n", "Requirement already satisfied: seaborn in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.9.0)\n", "Collecting pyyaml>=5.1\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/8d/c9/e5be955a117a1ac548cdd31e37e8fd7b02ce987f9655f5c7563c656d5dcb/PyYAML-5.2.tar.gz (265kB)\n", "\u001b[K |████████████████████████████████| 266kB 10.9MB/s \n", "\u001b[?25hCollecting pylint\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/e9/59/43fc36c5ee316bb9aeb7cf5329cdbdca89e5749c34d5602753827c0aa2dc/pylint-2.4.4-py3-none-any.whl (302kB)\n", "\u001b[K |████████████████████████████████| 307kB 22.3MB/s \n", "\u001b[?25hRequirement already satisfied: six==1.12 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.12.0)\n", "Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow->joeynmt==0.0.1) (0.46)\n", "Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.11.2)\n", "Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.2.2)\n", "Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.8.1)\n", "Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (3.1.0)\n", "Requirement already satisfied: tensorboard<1.16.0,>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.15.0)\n", "Requirement already satisfied: astor>=0.6.0 in 
/usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.8.1)\n", "Requirement already satisfied: tensorflow-estimator==1.15.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.15.1)\n", "Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.15.0)\n", "Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.33.6)\n", "Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (3.10.0)\n", "Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.1.8)\n", "Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.0.8)\n", "Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.1.0)\n", "Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.1.0)\n", "Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from torchtext->joeynmt==0.0.1) (4.28.1)\n", "Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from torchtext->joeynmt==0.0.1) (2.21.0)\n", "Collecting portalocker\n", " Downloading https://files.pythonhosted.org/packages/91/db/7bc703c0760df726839e0699b7f78a4d8217fdc9c7fcb1b51b39c5a22a4e/portalocker-1.5.2-py2.py3-none-any.whl\n", "Requirement already satisfied: typing in /usr/local/lib/python3.6/dist-packages (from sacrebleu>=1.3.6->joeynmt==0.0.1) (3.6.6)\n", "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (2.4.5)\n", "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (1.1.0)\n", "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (0.10.0)\n", "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (2.6.1)\n", "Requirement already satisfied: pandas>=0.15.2 in /usr/local/lib/python3.6/dist-packages (from seaborn->joeynmt==0.0.1) (0.25.3)\n", "Requirement already satisfied: scipy>=0.14.0 in /usr/local/lib/python3.6/dist-packages (from seaborn->joeynmt==0.0.1) (1.3.3)\n", "Collecting mccabe<0.7,>=0.6\n", " Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl\n", "Collecting isort<5,>=4.2.5\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/e5/b0/c121fd1fa3419ea9bfd55c7f9c4fedfec5143208d8c7ad3ce3db6c623c21/isort-4.3.21-py2.py3-none-any.whl (42kB)\n", "\u001b[K |████████████████████████████████| 51kB 8.6MB/s \n", "\u001b[?25hCollecting astroid<2.4,>=2.3.0\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/ad/ae/86734823047962e7b8c8529186a1ac4a7ca19aaf1aa0c7713c022ef593fd/astroid-2.3.3-py3-none-any.whl (205kB)\n", "\u001b[K |████████████████████████████████| 215kB 25.4MB/s \n", "\u001b[?25hRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from 
tensorboard<1.16.0,>=1.15.0->tensorflow>=1.14->joeynmt==0.0.1) (3.1.1)\n", "Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow>=1.14->joeynmt==0.0.1) (0.16.0)\n", "Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow>=1.14->joeynmt==0.0.1) (2.8.0)\n", "Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (3.0.4)\n", "Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (1.24.3)\n", "Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (2.8)\n", "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (2019.11.28)\n", "Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.15.2->seaborn->joeynmt==0.0.1) (2018.9)\n", "Collecting typed-ast<1.5,>=1.4.0; implementation_name == \"cpython\" and python_version < \"3.8\"\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/31/d3/9d1802c161626d0278bafb1ffb32f76b9d01e123881bbf9d91e8ccf28e18/typed_ast-1.4.0-cp36-cp36m-manylinux1_x86_64.whl (736kB)\n", "\u001b[K |████████████████████████████████| 737kB 26.4MB/s \n", "\u001b[?25hCollecting lazy-object-proxy==1.4.*\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/0b/dd/b1e3407e9e6913cf178e506cd0dee818e58694d9a5cd1984e3f6a8b9a10f/lazy_object_proxy-1.4.3-cp36-cp36m-manylinux1_x86_64.whl (55kB)\n", "\u001b[K |████████████████████████████████| 61kB 10.5MB/s \n", "\u001b[?25hBuilding wheels for collected packages: joeynmt, pyyaml\n", " Building wheel for joeynmt (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for joeynmt: filename=joeynmt-0.0.1-cp36-none-any.whl size=72136 sha256=77b08ba36545473f912885e7dd1849ba72a600b04598d2c533041a22c4b2334b\n", " Stored in directory: /tmp/pip-ephem-wheel-cache-irh9ejn6/wheels/db/01/db/751cc9f3e7f6faec127c43644ba250a3ea7ad200594aeda70a\n", " Building wheel for pyyaml (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n", " Created wheel for pyyaml: filename=PyYAML-5.2-cp36-cp36m-linux_x86_64.whl size=44209 sha256=16060e8d79413b558dc0fa40774fb8228633e49bfeec8755092376bb9284cc7f\n", " Stored in directory: /root/.cache/pip/wheels/54/b7/c7/2ada654ee54483c9329871665aaf4a6056c3ce36f29cf66e67\n", "Successfully built joeynmt pyyaml\n", "Installing collected packages: portalocker, sacrebleu, subword-nmt, pyyaml, mccabe, isort, typed-ast, lazy-object-proxy, astroid, pylint, joeynmt\n", " Found existing installation: PyYAML 3.13\n", " Uninstalling PyYAML-3.13:\n", " Successfully uninstalled PyYAML-3.13\n", "Successfully installed astroid-2.3.3 isort-4.3.21 joeynmt-0.0.1 lazy-object-proxy-1.4.3 mccabe-0.6.1 portalocker-1.5.2 pylint-2.4.4 pyyaml-5.2 sacrebleu-1.4.3 subword-nmt-0.3.7 typed-ast-1.4.0\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "AaE77Tcppex9" }, "source": [ "# Preprocessing the Data into Subword BPE Tokens\n", "\n", "- One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [(Sennrich, 2015)](https://arxiv.org/abs/1508.07909).\n", "\n", "- It was also shown that by optimizing the number of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685).\n", "\n", "- Below we have the scripts for doing BPE tokenization of our data. We use 4000 tokens as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything. Simply running the cells below will be sufficient." ] }, { "cell_type": "code", "metadata": { "id": "0DhFg6tlqVW5", "colab_type": "code", "colab": {} }, "source": [ "!ls drive/'My Drive'/masakhane/en-urh-baseline/models/enurh_transformer" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "H-TyjtmXB1mL", "colab": { "base_uri": "https://localhost:8080/", "height": 408 }, "outputId": "a07315cb-83d5-4bc9-9a65-77c0ea7d9437" }, "source": [ "# ####IOHAVOC -->> WE DO NOT WANT TO DO BPE BY DEFAULT, we need two different configs, but generate vocab below\n", "\n", "# One of the huge boosts in NMT performance came from using a different method of tokenizing.\n", "# Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance.\n", "\n", "# Do subword NMT\n", "from os import path\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "\n", "os.environ[\"data_path\"] = path.join(\"joeynmt\", \"data\", source_language + target_language) # Herman!\n", "\n", "# Learn BPEs on the training data.\n", "! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt\n", "\n", "# Apply the BPE codes to the train, dev, and test data.\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt\n", "\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src\n", "! 
subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt\n", "\n", "# Create directory, move everyone we care about to the correct location\n", "! mkdir -p $data_path\n", "! cp train.* $data_path\n", "! cp test.* $data_path\n", "! cp dev.* $data_path\n", "! cp bpe.codes.4000 $data_path\n", "! ls $data_path\n", "\n", "# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path\n", "! cp train.* \"$gdrive_path\"\n", "! cp test.* \"$gdrive_path\"\n", "! cp dev.* \"$gdrive_path\"\n", "! cp bpe.codes.4000 \"$gdrive_path\"\n", "! ls \"$gdrive_path\"\n", "\n", "# Create that vocab using build_vocab\n", "! sudo chmod 777 joeynmt/scripts/build_vocab.py\n", "! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path joeynmt/data/$src$tgt/vocab.txt\n", "\n", "# Some output\n", "! echo \"BPE Urhobo Sentences\"\n", "! tail -n 5 test.bpe.$tgt\n", "! echo \"Combined BPE Vocab\"\n", "! tail -n 10 joeynmt/data/$src$tgt/vocab.txt # Herman" ], "execution_count": 25, "outputs": [ { "output_type": "stream", "text": [ "bpe.codes.4000\tdev.en\t test.bpe.urh test.urh\t train.en\n", "dev.bpe.en\tdev.urh test.en\t train.bpe.en train.urh\n", "dev.bpe.urh\ttest.bpe.en test.en-any.en train.bpe.urh\n", "bpe.codes.4000\tdev.en\t test.bpe.urh test.urh\t train.en\n", "dev.bpe.en\tdev.urh test.en\t train.bpe.en train.urh\n", "dev.bpe.urh\ttest.bpe.en test.en-any.en train.bpe.urh\n", "BPE Urhobo Sentences\n", "Diesorọ H@@ us@@ ha@@ i vwọ guọnọ uduefiogbere ọ sa vwọ fuevun kẹ Ọghẹnẹ ?\n", "Diesorọ ọ vwọ guọnọ uduefiogbere avwanre ke sa fuevun ?\n", "Me nẹrhovwo vwọ kẹ uduefiogbere me sa vwọ yọn@@ regan .\n", "E@@ nẹ@@ na , ẹwẹn rayen kpotọ re , me sa kọn bru ayen ra ọkieje . ” — Se Isẹ 29 : 25 .\n", "[ 1 ] ( ẹkorota 7 ) E wene e@@ dẹ evo .\n", "Combined BPE Vocab\n", "ording\n", "ople\n", "eignty\n", "ẹgbaẹ@@\n", "Heb@@\n", "sider\n", "Babil@@\n", "/@@\n", "rọvw@@\n", "Jihov@@\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "kzp-2iLmyct4", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 238 }, "outputId": "b084dd27-b59b-4ac8-caf9-e7432b69c317" }, "source": [ "! ls joeynmt/data/enurh\n", "! ls joeynmt/configs" ], "execution_count": 37, "outputs": [ { "output_type": "stream", "text": [ "bpe.codes.4000\tdev.urh test.en-any.en train.en\n", "dev.bpe.en\ttest.bpe.en test.urh\t train.urh\n", "dev.bpe.urh\ttest.bpe.urh train.bpe.en vocab-nonBPE.txt\n", "dev.en\t\ttest.en train.bpe.urh vocab.txt\n", "iwslt14_deen_bpe.yaml\t\t transformer_reverse.yaml\n", "iwslt_deen_bahdanau.yaml\t transformer_small.yaml\n", "iwslt_envi_luong.yaml\t\t transformer_wmt17_ende.yaml\n", "iwslt_envi_xnmt.yaml\t\t transformer_wmt17_lven.yaml\n", "reverse.yaml\t\t\t wmt_ende_best.yaml\n", "small.yaml\t\t\t wmt_ende_default.yaml\n", "transformer_copy.yaml\t\t wmt_lven_best.yaml\n", "transformer_enurh.yaml\t\t wmt_lven_default.yaml\n", "transformer_iwslt14_deen_bpe.yaml\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "gRiUoc_ryUR8", "colab_type": "code", "colab": {} }, "source": [ "# ####IOHAVOC CREATE THE VOCAB FOR NON-BPE EXPERIMENTS\n", "! 
joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.$src joeynmt/data/$src$tgt/train.$tgt --output_path joeynmt/data/$src$tgt/vocab-nonBPE.txt\n" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "IlMitUHR8Qy-", "colab": { "base_uri": "https://localhost:8080/", "height": 68 }, "outputId": "413aee12-3849-42f1-da3c-ef0ca5bc8086" }, "source": [ "# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path\n", "! cp train.* \"$gdrive_path\"\n", "! cp test.* \"$gdrive_path\"\n", "! cp dev.* \"$gdrive_path\"\n", "! cp bpe.codes.4000 \"$gdrive_path\"\n", "! ls \"$gdrive_path\"" ], "execution_count": 36, "outputs": [ { "output_type": "stream", "text": [ "bpe.codes.4000\tdev.en\t test.bpe.urh test.urh\t train.en\n", "dev.bpe.en\tdev.urh test.en\t train.bpe.en train.urh\n", "dev.bpe.urh\ttest.bpe.en test.en-any.en train.bpe.urh\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ixmzi60WsUZ8" }, "source": [ "# Creating the JoeyNMT Config\n", "\n", "JoeyNMT requires a YAML config. We provide a template below, with a number of defaults already set that you may play with!\n", "\n", "- We use the Transformer architecture\n", "- We set our dropout reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))\n", "\n", "Things worth playing with:\n", "- The batch size (also recommended to change for low-resourced languages)\n", "- The number of epochs (set to 150 in the config below; around 30 is enough to check that training works at all)\n", "- The decoder options (beam_size, alpha)\n", "- The evaluation metric (BLEU versus chrF)" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "PIs1lY2hxMsl", "colab": {} }, "source": [ "# This creates the config file for our JoeyNMT system. 
It might seem overwhelming so we've provided a couple of useful parameters you'll need to update\n", "# (You can of course play with all the parameters if you'd like!)\n", "\n", "name = '%s%s' % (source_language, target_language)\n", "gdrive_path = os.environ[\"gdrive_path\"]\n", "\n", "# Create the config\n", "config = \"\"\"\n", "name: \"{name}_transformer\"\n", "\n", "data:\n", " src: \"{source_language}\"\n", " trg: \"{target_language}\"\n", " train: \"data/{name}/train.bpe\"\n", " dev: \"data/{name}/dev.bpe\"\n", " test: \"data/{name}/test.bpe\"\n", " level: \"bpe\"\n", " lowercase: False\n", " max_sent_length: 100\n", " src_vocab: \"data/{name}/vocab.txt\"\n", " trg_vocab: \"data/{name}/vocab.txt\"\n", "\n", "testing:\n", " beam_size: 5\n", " alpha: 1.0\n", "\n", "training:\n", " #load_model: \"{gdrive_path}/models/{name}_transformer/1.ckpt\" # if uncommented, load a pre-trained model from this checkpoint\n", " random_seed: 42\n", " optimizer: \"adam\"\n", " normalization: \"tokens\"\n", " adam_betas: [0.9, 0.999] \n", " scheduling: \"plateau\" # TODO: try switching from plateau to Noam scheduling\n", " patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.\n", " learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)\n", " learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)\n", " decrease_factor: 0.7\n", " loss: \"crossentropy\"\n", " learning_rate: 0.0003\n", " learning_rate_min: 0.00000001\n", " weight_decay: 0.0\n", " label_smoothing: 0.1\n", " batch_size: 4096\n", " batch_type: \"token\"\n", " eval_batch_size: 3600\n", " eval_batch_type: \"token\"\n", " batch_multiplier: 1\n", " early_stopping_metric: \"ppl\"\n", " epochs: 150 # TODO: Decrease for when playing around and checking of working. Around 30 is sufficient to check if its working at all\n", " validation_freq: 1000 # TODO: Set to at least once per epoch.\n", " logging_freq: 100\n", " eval_metric: \"bleu\"\n", " model_dir: \"models/{name}_transformer\"\n", " overwrite: True # TODO: Set to True if you want to overwrite possibly existing models. 
\n", " shuffle: True\n", " use_cuda: True\n", " max_output_length: 100\n", " print_valid_sents: [0, 1, 2, 3]\n", " keep_last_ckpts: 3\n", "\n", "model:\n", " initializer: \"xavier\"\n", " bias_initializer: \"zeros\"\n", " init_gain: 1.0\n", " embed_initializer: \"xavier\"\n", " embed_init_gain: 1.0\n", " tied_embeddings: True\n", " tied_softmax: True\n", " encoder:\n", " type: \"transformer\"\n", " num_layers: 6\n", " num_heads: 4 # TODO: Increase to 8 for larger data.\n", " embeddings:\n", " embedding_dim: 256 # TODO: Increase to 512 for larger data.\n", " scale: True\n", " dropout: 0.2\n", " # typically ff_size = 4 x hidden_size\n", " hidden_size: 256 # TODO: Increase to 512 for larger data.\n", " ff_size: 1024 # TODO: Increase to 2048 for larger data.\n", " dropout: 0.3\n", " decoder:\n", " type: \"transformer\"\n", " num_layers: 6\n", " num_heads: 4 # TODO: Increase to 8 for larger data.\n", " embeddings:\n", " embedding_dim: 256 # TODO: Increase to 512 for larger data.\n", " scale: True\n", " dropout: 0.2\n", " # typically ff_size = 4 x hidden_size\n", " hidden_size: 256 # TODO: Increase to 512 for larger data.\n", " ff_size: 1024 # TODO: Increase to 2048 for larger data.\n", " dropout: 0.3\n", "\"\"\".format(name=name, gdrive_path=os.environ[\"gdrive_path\"], source_language=source_language, target_language=target_language)\n", "with open(\"joeynmt/configs/transformer_{name}.yaml\".format(name=name),'w') as f:\n", " f.write(config)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "pIifxE3Qzuvs" }, "source": [ "# Train the Model\n", "\n", "This single line of joeynmt runs the training using the config we made above" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "6ZBPFwT94WpI", "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "outputId": "2d51fa39-c119-45f7-d9c1-ec5bdee1b2ec" }, "source": [ "# Train the model\n", "# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints! \n", "!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml" ], "execution_count": 55, "outputs": [ { "output_type": "stream", "text": [ "2019-12-30 08:41:49,260 Hello! 
This is Joey-NMT.\n", "2019-12-30 08:41:50,622 Total params: 12119552\n", "2019-12-30 08:41:50,623 Trainable parameters: ['decoder.layer_norm.bias', 'decoder.layer_norm.weight', 'decoder.layers.0.dec_layer_norm.bias', 'decoder.layers.0.dec_layer_norm.weight', 'decoder.layers.0.feed_forward.layer_norm.bias', 'decoder.layers.0.feed_forward.layer_norm.weight', 'decoder.layers.0.feed_forward.pwff_layer.0.bias', 'decoder.layers.0.feed_forward.pwff_layer.0.weight', 'decoder.layers.0.feed_forward.pwff_layer.3.bias', 'decoder.layers.0.feed_forward.pwff_layer.3.weight', 'decoder.layers.0.src_trg_att.k_layer.bias', 'decoder.layers.0.src_trg_att.k_layer.weight', 'decoder.layers.0.src_trg_att.output_layer.bias', 'decoder.layers.0.src_trg_att.output_layer.weight', 'decoder.layers.0.src_trg_att.q_layer.bias', 'decoder.layers.0.src_trg_att.q_layer.weight', 'decoder.layers.0.src_trg_att.v_layer.bias', 'decoder.layers.0.src_trg_att.v_layer.weight', 'decoder.layers.0.trg_trg_att.k_layer.bias', 'decoder.layers.0.trg_trg_att.k_layer.weight', 'decoder.layers.0.trg_trg_att.output_layer.bias', 'decoder.layers.0.trg_trg_att.output_layer.weight', 'decoder.layers.0.trg_trg_att.q_layer.bias', 'decoder.layers.0.trg_trg_att.q_layer.weight', 'decoder.layers.0.trg_trg_att.v_layer.bias', 'decoder.layers.0.trg_trg_att.v_layer.weight', 'decoder.layers.0.x_layer_norm.bias', 'decoder.layers.0.x_layer_norm.weight', 'decoder.layers.1.dec_layer_norm.bias', 'decoder.layers.1.dec_layer_norm.weight', 'decoder.layers.1.feed_forward.layer_norm.bias', 'decoder.layers.1.feed_forward.layer_norm.weight', 'decoder.layers.1.feed_forward.pwff_layer.0.bias', 'decoder.layers.1.feed_forward.pwff_layer.0.weight', 'decoder.layers.1.feed_forward.pwff_layer.3.bias', 'decoder.layers.1.feed_forward.pwff_layer.3.weight', 'decoder.layers.1.src_trg_att.k_layer.bias', 'decoder.layers.1.src_trg_att.k_layer.weight', 'decoder.layers.1.src_trg_att.output_layer.bias', 'decoder.layers.1.src_trg_att.output_layer.weight', 'decoder.layers.1.src_trg_att.q_layer.bias', 'decoder.layers.1.src_trg_att.q_layer.weight', 'decoder.layers.1.src_trg_att.v_layer.bias', 'decoder.layers.1.src_trg_att.v_layer.weight', 'decoder.layers.1.trg_trg_att.k_layer.bias', 'decoder.layers.1.trg_trg_att.k_layer.weight', 'decoder.layers.1.trg_trg_att.output_layer.bias', 'decoder.layers.1.trg_trg_att.output_layer.weight', 'decoder.layers.1.trg_trg_att.q_layer.bias', 'decoder.layers.1.trg_trg_att.q_layer.weight', 'decoder.layers.1.trg_trg_att.v_layer.bias', 'decoder.layers.1.trg_trg_att.v_layer.weight', 'decoder.layers.1.x_layer_norm.bias', 'decoder.layers.1.x_layer_norm.weight', 'decoder.layers.2.dec_layer_norm.bias', 'decoder.layers.2.dec_layer_norm.weight', 'decoder.layers.2.feed_forward.layer_norm.bias', 'decoder.layers.2.feed_forward.layer_norm.weight', 'decoder.layers.2.feed_forward.pwff_layer.0.bias', 'decoder.layers.2.feed_forward.pwff_layer.0.weight', 'decoder.layers.2.feed_forward.pwff_layer.3.bias', 'decoder.layers.2.feed_forward.pwff_layer.3.weight', 'decoder.layers.2.src_trg_att.k_layer.bias', 'decoder.layers.2.src_trg_att.k_layer.weight', 'decoder.layers.2.src_trg_att.output_layer.bias', 'decoder.layers.2.src_trg_att.output_layer.weight', 'decoder.layers.2.src_trg_att.q_layer.bias', 'decoder.layers.2.src_trg_att.q_layer.weight', 'decoder.layers.2.src_trg_att.v_layer.bias', 'decoder.layers.2.src_trg_att.v_layer.weight', 'decoder.layers.2.trg_trg_att.k_layer.bias', 'decoder.layers.2.trg_trg_att.k_layer.weight', 'decoder.layers.2.trg_trg_att.output_layer.bias', 
'decoder.layers.2.trg_trg_att.output_layer.weight', 'decoder.layers.2.trg_trg_att.q_layer.bias', 'decoder.layers.2.trg_trg_att.q_layer.weight', 'decoder.layers.2.trg_trg_att.v_layer.bias', 'decoder.layers.2.trg_trg_att.v_layer.weight', 'decoder.layers.2.x_layer_norm.bias', 'decoder.layers.2.x_layer_norm.weight', 'decoder.layers.3.dec_layer_norm.bias', 'decoder.layers.3.dec_layer_norm.weight', 'decoder.layers.3.feed_forward.layer_norm.bias', 'decoder.layers.3.feed_forward.layer_norm.weight', 'decoder.layers.3.feed_forward.pwff_layer.0.bias', 'decoder.layers.3.feed_forward.pwff_layer.0.weight', 'decoder.layers.3.feed_forward.pwff_layer.3.bias', 'decoder.layers.3.feed_forward.pwff_layer.3.weight', 'decoder.layers.3.src_trg_att.k_layer.bias', 'decoder.layers.3.src_trg_att.k_layer.weight', 'decoder.layers.3.src_trg_att.output_layer.bias', 'decoder.layers.3.src_trg_att.output_layer.weight', 'decoder.layers.3.src_trg_att.q_layer.bias', 'decoder.layers.3.src_trg_att.q_layer.weight', 'decoder.layers.3.src_trg_att.v_layer.bias', 'decoder.layers.3.src_trg_att.v_layer.weight', 'decoder.layers.3.trg_trg_att.k_layer.bias', 'decoder.layers.3.trg_trg_att.k_layer.weight', 'decoder.layers.3.trg_trg_att.output_layer.bias', 'decoder.layers.3.trg_trg_att.output_layer.weight', 'decoder.layers.3.trg_trg_att.q_layer.bias', 'decoder.layers.3.trg_trg_att.q_layer.weight', 'decoder.layers.3.trg_trg_att.v_layer.bias', 'decoder.layers.3.trg_trg_att.v_layer.weight', 'decoder.layers.3.x_layer_norm.bias', 'decoder.layers.3.x_layer_norm.weight', 'decoder.layers.4.dec_layer_norm.bias', 'decoder.layers.4.dec_layer_norm.weight', 'decoder.layers.4.feed_forward.layer_norm.bias', 'decoder.layers.4.feed_forward.layer_norm.weight', 'decoder.layers.4.feed_forward.pwff_layer.0.bias', 'decoder.layers.4.feed_forward.pwff_layer.0.weight', 'decoder.layers.4.feed_forward.pwff_layer.3.bias', 'decoder.layers.4.feed_forward.pwff_layer.3.weight', 'decoder.layers.4.src_trg_att.k_layer.bias', 'decoder.layers.4.src_trg_att.k_layer.weight', 'decoder.layers.4.src_trg_att.output_layer.bias', 'decoder.layers.4.src_trg_att.output_layer.weight', 'decoder.layers.4.src_trg_att.q_layer.bias', 'decoder.layers.4.src_trg_att.q_layer.weight', 'decoder.layers.4.src_trg_att.v_layer.bias', 'decoder.layers.4.src_trg_att.v_layer.weight', 'decoder.layers.4.trg_trg_att.k_layer.bias', 'decoder.layers.4.trg_trg_att.k_layer.weight', 'decoder.layers.4.trg_trg_att.output_layer.bias', 'decoder.layers.4.trg_trg_att.output_layer.weight', 'decoder.layers.4.trg_trg_att.q_layer.bias', 'decoder.layers.4.trg_trg_att.q_layer.weight', 'decoder.layers.4.trg_trg_att.v_layer.bias', 'decoder.layers.4.trg_trg_att.v_layer.weight', 'decoder.layers.4.x_layer_norm.bias', 'decoder.layers.4.x_layer_norm.weight', 'decoder.layers.5.dec_layer_norm.bias', 'decoder.layers.5.dec_layer_norm.weight', 'decoder.layers.5.feed_forward.layer_norm.bias', 'decoder.layers.5.feed_forward.layer_norm.weight', 'decoder.layers.5.feed_forward.pwff_layer.0.bias', 'decoder.layers.5.feed_forward.pwff_layer.0.weight', 'decoder.layers.5.feed_forward.pwff_layer.3.bias', 'decoder.layers.5.feed_forward.pwff_layer.3.weight', 'decoder.layers.5.src_trg_att.k_layer.bias', 'decoder.layers.5.src_trg_att.k_layer.weight', 'decoder.layers.5.src_trg_att.output_layer.bias', 'decoder.layers.5.src_trg_att.output_layer.weight', 'decoder.layers.5.src_trg_att.q_layer.bias', 'decoder.layers.5.src_trg_att.q_layer.weight', 'decoder.layers.5.src_trg_att.v_layer.bias', 'decoder.layers.5.src_trg_att.v_layer.weight', 
'decoder.layers.5.trg_trg_att.k_layer.bias', 'decoder.layers.5.trg_trg_att.k_layer.weight', 'decoder.layers.5.trg_trg_att.output_layer.bias', 'decoder.layers.5.trg_trg_att.output_layer.weight', 'decoder.layers.5.trg_trg_att.q_layer.bias', 'decoder.layers.5.trg_trg_att.q_layer.weight', 'decoder.layers.5.trg_trg_att.v_layer.bias', 'decoder.layers.5.trg_trg_att.v_layer.weight', 'decoder.layers.5.x_layer_norm.bias', 'decoder.layers.5.x_layer_norm.weight', 'encoder.layer_norm.bias', 'encoder.layer_norm.weight', 'encoder.layers.0.feed_forward.layer_norm.bias', 'encoder.layers.0.feed_forward.layer_norm.weight', 'encoder.layers.0.feed_forward.pwff_layer.0.bias', 'encoder.layers.0.feed_forward.pwff_layer.0.weight', 'encoder.layers.0.feed_forward.pwff_layer.3.bias', 'encoder.layers.0.feed_forward.pwff_layer.3.weight', 'encoder.layers.0.layer_norm.bias', 'encoder.layers.0.layer_norm.weight', 'encoder.layers.0.src_src_att.k_layer.bias', 'encoder.layers.0.src_src_att.k_layer.weight', 'encoder.layers.0.src_src_att.output_layer.bias', 'encoder.layers.0.src_src_att.output_layer.weight', 'encoder.layers.0.src_src_att.q_layer.bias', 'encoder.layers.0.src_src_att.q_layer.weight', 'encoder.layers.0.src_src_att.v_layer.bias', 'encoder.layers.0.src_src_att.v_layer.weight', 'encoder.layers.1.feed_forward.layer_norm.bias', 'encoder.layers.1.feed_forward.layer_norm.weight', 'encoder.layers.1.feed_forward.pwff_layer.0.bias', 'encoder.layers.1.feed_forward.pwff_layer.0.weight', 'encoder.layers.1.feed_forward.pwff_layer.3.bias', 'encoder.layers.1.feed_forward.pwff_layer.3.weight', 'encoder.layers.1.layer_norm.bias', 'encoder.layers.1.layer_norm.weight', 'encoder.layers.1.src_src_att.k_layer.bias', 'encoder.layers.1.src_src_att.k_layer.weight', 'encoder.layers.1.src_src_att.output_layer.bias', 'encoder.layers.1.src_src_att.output_layer.weight', 'encoder.layers.1.src_src_att.q_layer.bias', 'encoder.layers.1.src_src_att.q_layer.weight', 'encoder.layers.1.src_src_att.v_layer.bias', 'encoder.layers.1.src_src_att.v_layer.weight', 'encoder.layers.2.feed_forward.layer_norm.bias', 'encoder.layers.2.feed_forward.layer_norm.weight', 'encoder.layers.2.feed_forward.pwff_layer.0.bias', 'encoder.layers.2.feed_forward.pwff_layer.0.weight', 'encoder.layers.2.feed_forward.pwff_layer.3.bias', 'encoder.layers.2.feed_forward.pwff_layer.3.weight', 'encoder.layers.2.layer_norm.bias', 'encoder.layers.2.layer_norm.weight', 'encoder.layers.2.src_src_att.k_layer.bias', 'encoder.layers.2.src_src_att.k_layer.weight', 'encoder.layers.2.src_src_att.output_layer.bias', 'encoder.layers.2.src_src_att.output_layer.weight', 'encoder.layers.2.src_src_att.q_layer.bias', 'encoder.layers.2.src_src_att.q_layer.weight', 'encoder.layers.2.src_src_att.v_layer.bias', 'encoder.layers.2.src_src_att.v_layer.weight', 'encoder.layers.3.feed_forward.layer_norm.bias', 'encoder.layers.3.feed_forward.layer_norm.weight', 'encoder.layers.3.feed_forward.pwff_layer.0.bias', 'encoder.layers.3.feed_forward.pwff_layer.0.weight', 'encoder.layers.3.feed_forward.pwff_layer.3.bias', 'encoder.layers.3.feed_forward.pwff_layer.3.weight', 'encoder.layers.3.layer_norm.bias', 'encoder.layers.3.layer_norm.weight', 'encoder.layers.3.src_src_att.k_layer.bias', 'encoder.layers.3.src_src_att.k_layer.weight', 'encoder.layers.3.src_src_att.output_layer.bias', 'encoder.layers.3.src_src_att.output_layer.weight', 'encoder.layers.3.src_src_att.q_layer.bias', 'encoder.layers.3.src_src_att.q_layer.weight', 'encoder.layers.3.src_src_att.v_layer.bias', 'encoder.layers.3.src_src_att.v_layer.weight', 
'encoder.layers.4.feed_forward.layer_norm.bias', 'encoder.layers.4.feed_forward.layer_norm.weight', 'encoder.layers.4.feed_forward.pwff_layer.0.bias', 'encoder.layers.4.feed_forward.pwff_layer.0.weight', 'encoder.layers.4.feed_forward.pwff_layer.3.bias', 'encoder.layers.4.feed_forward.pwff_layer.3.weight', 'encoder.layers.4.layer_norm.bias', 'encoder.layers.4.layer_norm.weight', 'encoder.layers.4.src_src_att.k_layer.bias', 'encoder.layers.4.src_src_att.k_layer.weight', 'encoder.layers.4.src_src_att.output_layer.bias', 'encoder.layers.4.src_src_att.output_layer.weight', 'encoder.layers.4.src_src_att.q_layer.bias', 'encoder.layers.4.src_src_att.q_layer.weight', 'encoder.layers.4.src_src_att.v_layer.bias', 'encoder.layers.4.src_src_att.v_layer.weight', 'encoder.layers.5.feed_forward.layer_norm.bias', 'encoder.layers.5.feed_forward.layer_norm.weight', 'encoder.layers.5.feed_forward.pwff_layer.0.bias', 'encoder.layers.5.feed_forward.pwff_layer.0.weight', 'encoder.layers.5.feed_forward.pwff_layer.3.bias', 'encoder.layers.5.feed_forward.pwff_layer.3.weight', 'encoder.layers.5.layer_norm.bias', 'encoder.layers.5.layer_norm.weight', 'encoder.layers.5.src_src_att.k_layer.bias', 'encoder.layers.5.src_src_att.k_layer.weight', 'encoder.layers.5.src_src_att.output_layer.bias', 'encoder.layers.5.src_src_att.output_layer.weight', 'encoder.layers.5.src_src_att.q_layer.bias', 'encoder.layers.5.src_src_att.q_layer.weight', 'encoder.layers.5.src_src_att.v_layer.bias', 'encoder.layers.5.src_src_att.v_layer.weight', 'src_embed.lut.weight']\n", "2019-12-30 08:41:54,850 cfg.name : enurh_transformer\n", "2019-12-30 08:41:54,850 cfg.data.src : en\n", "2019-12-30 08:41:54,850 cfg.data.trg : urh\n", "2019-12-30 08:41:54,851 cfg.data.train : data/enurh/train.bpe\n", "2019-12-30 08:41:54,851 cfg.data.dev : data/enurh/dev.bpe\n", "2019-12-30 08:41:54,851 cfg.data.test : data/enurh/test.bpe\n", "2019-12-30 08:41:54,851 cfg.data.level : bpe\n", "2019-12-30 08:41:54,851 cfg.data.lowercase : False\n", "2019-12-30 08:41:54,851 cfg.data.max_sent_length : 100\n", "2019-12-30 08:41:54,851 cfg.data.src_vocab : data/enurh/vocab.txt\n", "2019-12-30 08:41:54,851 cfg.data.trg_vocab : data/enurh/vocab.txt\n", "2019-12-30 08:41:54,851 cfg.testing.beam_size : 5\n", "2019-12-30 08:41:54,851 cfg.testing.alpha : 1.0\n", "2019-12-30 08:41:54,851 cfg.training.random_seed : 42\n", "2019-12-30 08:41:54,851 cfg.training.optimizer : adam\n", "2019-12-30 08:41:54,851 cfg.training.normalization : tokens\n", "2019-12-30 08:41:54,851 cfg.training.adam_betas : [0.9, 0.999]\n", "2019-12-30 08:41:54,851 cfg.training.scheduling : plateau\n", "2019-12-30 08:41:54,851 cfg.training.patience : 5\n", "2019-12-30 08:41:54,852 cfg.training.learning_rate_factor : 0.5\n", "2019-12-30 08:41:54,852 cfg.training.learning_rate_warmup : 1000\n", "2019-12-30 08:41:54,852 cfg.training.decrease_factor : 0.7\n", "2019-12-30 08:41:54,852 cfg.training.loss : crossentropy\n", "2019-12-30 08:41:54,852 cfg.training.learning_rate : 0.0003\n", "2019-12-30 08:41:54,852 cfg.training.learning_rate_min : 1e-08\n", "2019-12-30 08:41:54,852 cfg.training.weight_decay : 0.0\n", "2019-12-30 08:41:54,852 cfg.training.label_smoothing : 0.1\n", "2019-12-30 08:41:54,852 cfg.training.batch_size : 4096\n", "2019-12-30 08:41:54,852 cfg.training.batch_type : token\n", "2019-12-30 08:41:54,852 cfg.training.eval_batch_size : 3600\n", "2019-12-30 08:41:54,852 cfg.training.eval_batch_type : token\n", "2019-12-30 08:41:54,852 cfg.training.batch_multiplier : 1\n", "2019-12-30 08:41:54,852 
cfg.training.early_stopping_metric : ppl\n", "2019-12-30 08:41:54,852 cfg.training.epochs : 150\n", "2019-12-30 08:41:54,852 cfg.training.validation_freq : 1000\n", "2019-12-30 08:41:54,852 cfg.training.logging_freq : 100\n", "2019-12-30 08:41:54,852 cfg.training.eval_metric : bleu\n", "2019-12-30 08:41:54,853 cfg.training.model_dir : models/enurh_transformer\n", "2019-12-30 08:41:54,853 cfg.training.overwrite : True\n", "2019-12-30 08:41:54,853 cfg.training.shuffle : True\n", "2019-12-30 08:41:54,853 cfg.training.use_cuda : True\n", "2019-12-30 08:41:54,853 cfg.training.max_output_length : 100\n", "2019-12-30 08:41:54,853 cfg.training.print_valid_sents : [0, 1, 2, 3]\n", "2019-12-30 08:41:54,853 cfg.training.keep_last_ckpts : 3\n", "2019-12-30 08:41:54,853 cfg.model.initializer : xavier\n", "2019-12-30 08:41:54,853 cfg.model.bias_initializer : zeros\n", "2019-12-30 08:41:54,853 cfg.model.init_gain : 1.0\n", "2019-12-30 08:41:54,853 cfg.model.embed_initializer : xavier\n", "2019-12-30 08:41:54,853 cfg.model.embed_init_gain : 1.0\n", "2019-12-30 08:41:54,853 cfg.model.tied_embeddings : True\n", "2019-12-30 08:41:54,853 cfg.model.tied_softmax : True\n", "2019-12-30 08:41:54,853 cfg.model.encoder.type : transformer\n", "2019-12-30 08:41:54,853 cfg.model.encoder.num_layers : 6\n", "2019-12-30 08:41:54,853 cfg.model.encoder.num_heads : 4\n", "2019-12-30 08:41:54,853 cfg.model.encoder.embeddings.embedding_dim : 256\n", "2019-12-30 08:41:54,853 cfg.model.encoder.embeddings.scale : True\n", "2019-12-30 08:41:54,854 cfg.model.encoder.embeddings.dropout : 0.2\n", "2019-12-30 08:41:54,854 cfg.model.encoder.hidden_size : 256\n", "2019-12-30 08:41:54,854 cfg.model.encoder.ff_size : 1024\n", "2019-12-30 08:41:54,854 cfg.model.encoder.dropout : 0.3\n", "2019-12-30 08:41:54,854 cfg.model.decoder.type : transformer\n", "2019-12-30 08:41:54,854 cfg.model.decoder.num_layers : 6\n", "2019-12-30 08:41:54,854 cfg.model.decoder.num_heads : 4\n", "2019-12-30 08:41:54,854 cfg.model.decoder.embeddings.embedding_dim : 256\n", "2019-12-30 08:41:54,854 cfg.model.decoder.embeddings.scale : True\n", "2019-12-30 08:41:54,854 cfg.model.decoder.embeddings.dropout : 0.2\n", "2019-12-30 08:41:54,854 cfg.model.decoder.hidden_size : 256\n", "2019-12-30 08:41:54,854 cfg.model.decoder.ff_size : 1024\n", "2019-12-30 08:41:54,854 cfg.model.decoder.dropout : 0.3\n", "2019-12-30 08:41:54,854 Data set sizes: \n", "\ttrain 25605,\n", "\tvalid 1000,\n", "\ttest 2652\n", "2019-12-30 08:41:54,854 First training example:\n", "\t[SRC] The number of publishers is now about ten times what it was when I began serving here .\n", "\t[TRG] I@@ ghwoghwota rehẹ ẹkuotọ na enẹna vwẹ ọh@@ wọ@@ h@@ wọ ihwe vwo bun vrẹ obo rọ hepha ọke me vwọ ga vwẹ oboyin .\n", "2019-12-30 08:41:54,854 First 10 words (src): (0) (1) (2) (3) (4) . (5) , (6) rẹ (7) the (8) to (9) na\n", "2019-12-30 08:41:54,855 First 10 words (trg): (0) (1) (2) (3) (4) . 
(5) , (6) rẹ (7) the (8) to (9) na\n", "2019-12-30 08:41:54,855 Number of Src words (types): 4138\n", "2019-12-30 08:41:54,855 Number of Trg words (types): 4138\n", "2019-12-30 08:41:54,855 Model(\n", "\tencoder=TransformerEncoder(num_layers=6, num_heads=4),\n", "\tdecoder=TransformerDecoder(num_layers=6, num_heads=4),\n", "\tsrc_embed=Embeddings(embedding_dim=256, vocab_size=4138),\n", "\ttrg_embed=Embeddings(embedding_dim=256, vocab_size=4138))\n", "2019-12-30 08:41:54,859 EPOCH 1\n", "2019-12-30 08:42:09,261 Epoch 1 Step: 100 Batch Loss: 5.333814 Tokens per Sec: 14488, Lr: 0.000300\n", "2019-12-30 08:42:23,693 Epoch 1 Step: 200 Batch Loss: 4.835469 Tokens per Sec: 14461, Lr: 0.000300\n", "2019-12-30 08:42:37,513 Epoch 1: total training loss 1509.76\n", "2019-12-30 08:42:37,514 EPOCH 2\n", "2019-12-30 08:42:38,118 Epoch 2 Step: 300 Batch Loss: 4.461406 Tokens per Sec: 12421, Lr: 0.000300\n", "2019-12-30 08:42:52,781 Epoch 2 Step: 400 Batch Loss: 4.489047 Tokens per Sec: 14223, Lr: 0.000300\n", "2019-12-30 08:43:07,327 Epoch 2 Step: 500 Batch Loss: 4.257133 Tokens per Sec: 14070, Lr: 0.000300\n", "2019-12-30 08:43:20,803 Epoch 2: total training loss 1248.57\n", "2019-12-30 08:43:20,803 EPOCH 3\n", "2019-12-30 08:43:22,074 Epoch 3 Step: 600 Batch Loss: 3.990062 Tokens per Sec: 13409, Lr: 0.000300\n", "2019-12-30 08:43:36,904 Epoch 3 Step: 700 Batch Loss: 3.748931 Tokens per Sec: 14040, Lr: 0.000300\n", "2019-12-30 08:43:51,709 Epoch 3 Step: 800 Batch Loss: 3.796058 Tokens per Sec: 13843, Lr: 0.000300\n", "2019-12-30 08:44:04,970 Epoch 3: total training loss 1113.12\n", "2019-12-30 08:44:04,970 EPOCH 4\n", "2019-12-30 08:44:06,687 Epoch 4 Step: 900 Batch Loss: 3.562904 Tokens per Sec: 13910, Lr: 0.000300\n", "2019-12-30 08:44:21,800 Epoch 4 Step: 1000 Batch Loss: 3.535983 Tokens per Sec: 13939, Lr: 0.000300\n", "2019-12-30 08:45:06,397 Hooray! New best validation result [ppl]!\n", "2019-12-30 08:45:06,397 Saving new checkpoint.\n", "2019-12-30 08:45:06,622 Example #0\n", "2019-12-30 08:45:06,623 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 08:45:06,623 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 08:45:06,623 \tHypothesis: ( 1 Kọr . 2 : 1 ) Ẹkẹvuọvo , avwanre de vwo ẹguọnọ rẹ avwanre , ọ da dianẹ avwanre vwo ẹguọnọ rẹ avwanre vwo ẹguọnọ rẹ avwanre .\n", "2019-12-30 08:45:06,623 Example #1\n", "2019-12-30 08:45:06,623 \tSource: Today he is serving at Bethel .\n", "2019-12-30 08:45:06,623 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 08:45:06,623 \tHypothesis: ( 1 Kọr . 4 : 1 ) Ẹkẹvuọvo , ọ da vwẹ ukẹcha kẹ avwanre .\n", "2019-12-30 08:45:06,623 Example #2\n", "2019-12-30 08:45:06,624 \tSource: But freedom from what ?\n", "2019-12-30 08:45:06,624 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 08:45:06,624 \tHypothesis: Mavọ yen a sa vwọ kẹ avwanre ?\n", "2019-12-30 08:45:06,624 Example #3\n", "2019-12-30 08:45:06,624 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 08:45:06,624 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 08:45:06,624 \tHypothesis: ( 1 Kọr . 
2 : 1 ) ( a ) ( 1 ) ( 1 ) ( 1 ) ( 1 ) ( 1 Kọr .\n", "2019-12-30 08:45:06,624 Validation result (greedy) at epoch 4, step 1000: bleu: 1.38, loss: 82894.2578, ppl: 33.5709, duration: 44.8242s\n", "2019-12-30 08:45:21,833 Epoch 4 Step: 1100 Batch Loss: 3.485683 Tokens per Sec: 13067, Lr: 0.000300\n", "2019-12-30 08:45:34,954 Epoch 4: total training loss 1038.29\n", "2019-12-30 08:45:34,954 EPOCH 5\n", "2019-12-30 08:45:37,381 Epoch 5 Step: 1200 Batch Loss: 3.379636 Tokens per Sec: 13417, Lr: 0.000300\n", "2019-12-30 08:45:52,700 Epoch 5 Step: 1300 Batch Loss: 3.151871 Tokens per Sec: 13524, Lr: 0.000300\n", "2019-12-30 08:46:08,001 Epoch 5 Step: 1400 Batch Loss: 3.243618 Tokens per Sec: 13434, Lr: 0.000300\n", "2019-12-30 08:46:20,293 Epoch 5: total training loss 981.76\n", "2019-12-30 08:46:20,293 EPOCH 6\n", "2019-12-30 08:46:23,333 Epoch 6 Step: 1500 Batch Loss: 2.847438 Tokens per Sec: 13122, Lr: 0.000300\n", "2019-12-30 08:46:38,793 Epoch 6 Step: 1600 Batch Loss: 3.177434 Tokens per Sec: 13504, Lr: 0.000300\n", "2019-12-30 08:46:54,261 Epoch 6 Step: 1700 Batch Loss: 3.327428 Tokens per Sec: 13739, Lr: 0.000300\n", "2019-12-30 08:47:05,543 Epoch 6: total training loss 931.08\n", "2019-12-30 08:47:05,544 EPOCH 7\n", "2019-12-30 08:47:09,567 Epoch 7 Step: 1800 Batch Loss: 3.168892 Tokens per Sec: 13699, Lr: 0.000300\n", "2019-12-30 08:47:24,646 Epoch 7 Step: 1900 Batch Loss: 2.927883 Tokens per Sec: 13340, Lr: 0.000300\n", "2019-12-30 08:47:39,866 Epoch 7 Step: 2000 Batch Loss: 2.646217 Tokens per Sec: 13390, Lr: 0.000300\n", "2019-12-30 08:48:25,096 Hooray! New best validation result [ppl]!\n", "2019-12-30 08:48:25,096 Saving new checkpoint.\n", "2019-12-30 08:48:25,316 Example #0\n", "2019-12-30 08:48:25,317 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 08:48:25,317 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 08:48:25,317 \tHypothesis: ( 1 Kọr . 
2 : 1 ) Ekpako na da tobọ dianẹ ayen se vwo ẹguọnọ rẹ avwanre , ji vwo kpahen oborẹ ayen sa vwọ chọn ayen uko .\n", "2019-12-30 08:48:25,317 Example #1\n", "2019-12-30 08:48:25,317 \tSource: Today he is serving at Bethel .\n", "2019-12-30 08:48:25,317 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 08:48:25,317 \tHypothesis: Ayen da vwẹ ukpe rẹ 1987 .\n", "2019-12-30 08:48:25,317 Example #2\n", "2019-12-30 08:48:25,318 \tSource: But freedom from what ?\n", "2019-12-30 08:48:25,318 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 08:48:25,318 \tHypothesis: Ẹkẹvuọvo , die yen e se vwo ?\n", "2019-12-30 08:48:25,318 Example #3\n", "2019-12-30 08:48:25,318 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 08:48:25,318 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 08:48:25,318 \tHypothesis: Wọ sa mrẹ erere vwo nẹ wo vwo ruiruo vwẹ ukoko na .\n", "2019-12-30 08:48:25,318 Validation result (greedy) at epoch 7, step 2000: bleu: 3.39, loss: 71127.9922, ppl: 20.3875, duration: 45.4516s\n", "2019-12-30 08:48:36,574 Epoch 7: total training loss 906.96\n", "2019-12-30 08:48:36,574 EPOCH 8\n", "2019-12-30 08:48:40,899 Epoch 8 Step: 2100 Batch Loss: 3.059584 Tokens per Sec: 13779, Lr: 0.000300\n", "2019-12-30 08:48:56,116 Epoch 8 Step: 2200 Batch Loss: 2.695834 Tokens per Sec: 13231, Lr: 0.000300\n", "2019-12-30 08:49:11,387 Epoch 8 Step: 2300 Batch Loss: 2.762779 Tokens per Sec: 13531, Lr: 0.000300\n", "2019-12-30 08:49:22,205 Epoch 8: total training loss 865.01\n", "2019-12-30 08:49:22,205 EPOCH 9\n", "2019-12-30 08:49:26,753 Epoch 9 Step: 2400 Batch Loss: 3.282227 Tokens per Sec: 13201, Lr: 0.000300\n", "2019-12-30 08:49:42,210 Epoch 9 Step: 2500 Batch Loss: 3.134735 Tokens per Sec: 13617, Lr: 0.000300\n", "2019-12-30 08:49:57,564 Epoch 9 Step: 2600 Batch Loss: 2.445373 Tokens per Sec: 13352, Lr: 0.000300\n", "2019-12-30 08:50:07,617 Epoch 9: total training loss 833.76\n", "2019-12-30 08:50:07,618 EPOCH 10\n", "2019-12-30 08:50:12,905 Epoch 10 Step: 2700 Batch Loss: 2.302863 Tokens per Sec: 13712, Lr: 0.000300\n", "2019-12-30 08:50:28,253 Epoch 10 Step: 2800 Batch Loss: 2.886033 Tokens per Sec: 13682, Lr: 0.000300\n", "2019-12-30 08:50:43,622 Epoch 10 Step: 2900 Batch Loss: 2.831291 Tokens per Sec: 13305, Lr: 0.000300\n", "2019-12-30 08:50:52,835 Epoch 10: total training loss 803.16\n", "2019-12-30 08:50:52,835 EPOCH 11\n", "2019-12-30 08:50:59,039 Epoch 11 Step: 3000 Batch Loss: 2.710369 Tokens per Sec: 13227, Lr: 0.000300\n", "2019-12-30 08:51:44,396 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 08:51:44,396 Saving new checkpoint.\n", "2019-12-30 08:51:44,660 Example #0\n", "2019-12-30 08:51:44,660 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 08:51:44,660 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 08:51:44,660 \tHypothesis: Enana cha nẹrhẹ ayen riẹn oborẹ ayen sa vwọ vwẹ ukẹcha kẹ ihwo efa , ji vwo ẹruọ rẹ oborẹ ayen cha vwọ chọn avwanre uko .\n", "2019-12-30 08:51:44,660 Example #1\n", "2019-12-30 08:51:44,661 \tSource: Today he is serving at Bethel .\n", "2019-12-30 08:51:44,661 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 08:51:44,661 \tHypothesis: Nonẹna , ọ je ga vwẹ Bẹtẹl .\n", "2019-12-30 08:51:44,661 Example #2\n", "2019-12-30 08:51:44,661 \tSource: But freedom from what ?\n", "2019-12-30 08:51:44,661 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 08:51:44,661 \tHypothesis: Ẹkẹvuọvo , die yen nẹrhẹ e vwo uruemu rẹ egbomọphẹ ?\n", "2019-12-30 08:51:44,661 Example #3\n", "2019-12-30 08:51:44,661 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 08:51:44,661 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 08:51:44,661 \tHypothesis: Wọ da guọnọ ukẹcha rẹ ukoko wẹn .\n", "2019-12-30 08:51:44,661 Validation result (greedy) at epoch 11, step 3000: bleu: 5.88, loss: 64387.6484, ppl: 15.3209, duration: 45.6222s\n", "2019-12-30 08:51:59,941 Epoch 11 Step: 3100 Batch Loss: 2.819924 Tokens per Sec: 13163, Lr: 0.000300\n", "2019-12-30 08:52:15,252 Epoch 11 Step: 3200 Batch Loss: 2.732879 Tokens per Sec: 13507, Lr: 0.000300\n", "2019-12-30 08:52:24,251 Epoch 11: total training loss 795.02\n", "2019-12-30 08:52:24,251 EPOCH 12\n", "2019-12-30 08:52:30,625 Epoch 12 Step: 3300 Batch Loss: 2.550137 Tokens per Sec: 13489, Lr: 0.000300\n", "2019-12-30 08:52:45,913 Epoch 12 Step: 3400 Batch Loss: 2.107007 Tokens per Sec: 13496, Lr: 0.000300\n", "2019-12-30 08:53:01,317 Epoch 12 Step: 3500 Batch Loss: 2.299843 Tokens per Sec: 13637, Lr: 0.000300\n", "2019-12-30 08:53:09,698 Epoch 12: total training loss 765.54\n", "2019-12-30 08:53:09,698 EPOCH 13\n", "2019-12-30 08:53:16,564 Epoch 13 Step: 3600 Batch Loss: 2.498705 Tokens per Sec: 13186, Lr: 0.000300\n", "2019-12-30 08:53:31,870 Epoch 13 Step: 3700 Batch Loss: 2.611616 Tokens per Sec: 13545, Lr: 0.000300\n", "2019-12-30 08:53:47,267 Epoch 13 Step: 3800 Batch Loss: 2.487637 Tokens per Sec: 13486, Lr: 0.000300\n", "2019-12-30 08:53:55,107 Epoch 13: total training loss 747.17\n", "2019-12-30 08:53:55,108 EPOCH 14\n", "2019-12-30 08:54:02,679 Epoch 14 Step: 3900 Batch Loss: 2.895444 Tokens per Sec: 13149, Lr: 0.000300\n", "2019-12-30 08:54:18,012 Epoch 14 Step: 4000 Batch Loss: 2.870493 Tokens per Sec: 13466, Lr: 0.000300\n", "2019-12-30 08:55:03,258 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 08:55:03,258 Saving new checkpoint.\n", "2019-12-30 08:55:03,514 Example #0\n", "2019-12-30 08:55:03,514 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 08:55:03,514 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 08:55:03,514 \tHypothesis: Iruemu nana djerephia nẹ ayen che se vwo ẹruọ rẹ oborẹ ẹwẹn rẹ avwanre se vwo ruiruo vwẹ idjerhe rẹ ẹwẹn rẹ ẹwẹn , je cha nẹrhẹ e vwo ẹwẹn rẹ avwanre .\n", "2019-12-30 08:55:03,514 Example #1\n", "2019-12-30 08:55:03,514 \tSource: Today he is serving at Bethel .\n", "2019-12-30 08:55:03,514 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 08:55:03,515 \tHypothesis: Nonẹna , ọ je ga vwẹ Bẹtẹl .\n", "2019-12-30 08:55:03,515 Example #2\n", "2019-12-30 08:55:03,515 \tSource: But freedom from what ?\n", "2019-12-30 08:55:03,515 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 08:55:03,515 \tHypothesis: Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 08:55:03,515 Example #3\n", "2019-12-30 08:55:03,515 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 08:55:03,515 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 08:55:03,515 \tHypothesis: Wọ da guọnọ dia ukoko na , wọ je guọnọ nẹ ukoko wẹn rhe .\n", "2019-12-30 08:55:03,515 Validation result (greedy) at epoch 14, step 4000: bleu: 7.95, loss: 60269.4805, ppl: 12.8669, duration: 45.5034s\n", "2019-12-30 08:55:18,898 Epoch 14 Step: 4100 Batch Loss: 2.274558 Tokens per Sec: 13370, Lr: 0.000300\n", "2019-12-30 08:55:26,410 Epoch 14: total training loss 737.72\n", "2019-12-30 08:55:26,411 EPOCH 15\n", "2019-12-30 08:55:34,036 Epoch 15 Step: 4200 Batch Loss: 1.771384 Tokens per Sec: 13415, Lr: 0.000300\n", "2019-12-30 08:55:49,408 Epoch 15 Step: 4300 Batch Loss: 2.053605 Tokens per Sec: 13602, Lr: 0.000300\n", "2019-12-30 08:56:04,831 Epoch 15 Step: 4400 Batch Loss: 2.435067 Tokens per Sec: 13230, Lr: 0.000300\n", "2019-12-30 08:56:12,033 Epoch 15: total training loss 717.46\n", "2019-12-30 08:56:12,033 EPOCH 16\n", "2019-12-30 08:56:20,073 Epoch 16 Step: 4500 Batch Loss: 2.078786 Tokens per Sec: 13494, Lr: 0.000300\n", "2019-12-30 08:56:35,484 Epoch 16 Step: 4600 Batch Loss: 2.717738 Tokens per Sec: 13580, Lr: 0.000300\n", "2019-12-30 08:56:50,684 Epoch 16 Step: 4700 Batch Loss: 2.417445 Tokens per Sec: 13626, Lr: 0.000300\n", "2019-12-30 08:56:57,285 Epoch 16: total training loss 694.98\n", "2019-12-30 08:56:57,285 EPOCH 17\n", "2019-12-30 08:57:06,040 Epoch 17 Step: 4800 Batch Loss: 2.413284 Tokens per Sec: 13286, Lr: 0.000300\n", "2019-12-30 08:57:21,502 Epoch 17 Step: 4900 Batch Loss: 2.445009 Tokens per Sec: 13588, Lr: 0.000300\n", "2019-12-30 08:57:36,806 Epoch 17 Step: 5000 Batch Loss: 2.829042 Tokens per Sec: 13608, Lr: 0.000300\n", "2019-12-30 08:58:22,133 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 08:58:22,133 Saving new checkpoint.\n", "2019-12-30 08:58:22,412 Example #0\n", "2019-12-30 08:58:22,412 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 08:58:22,412 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 08:58:22,412 \tHypothesis: Ọnana yen idjerhe ọvo rẹ ayen se vwo ru iroro rẹ avwanre phiyọ , rere ayen se vwo vwo ẹwẹn obrorhiẹn rẹ avwanre .\n", "2019-12-30 08:58:22,412 Example #1\n", "2019-12-30 08:58:22,412 \tSource: Today he is serving at Bethel .\n", "2019-12-30 08:58:22,412 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 08:58:22,412 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 08:58:22,412 Example #2\n", "2019-12-30 08:58:22,413 \tSource: But freedom from what ?\n", "2019-12-30 08:58:22,413 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 08:58:22,413 \tHypothesis: Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 08:58:22,413 Example #3\n", "2019-12-30 08:58:22,413 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 08:58:22,413 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 08:58:22,413 \tHypothesis: Wọ vwẹ ukoko na vwo ruiruo vwẹ ukoko na .\n", "2019-12-30 08:58:22,413 Validation result (greedy) at epoch 17, step 5000: bleu: 9.63, loss: 57490.7852, ppl: 11.4373, duration: 45.6072s\n", "2019-12-30 08:58:28,129 Epoch 17: total training loss 678.36\n", "2019-12-30 08:58:28,129 EPOCH 18\n", "2019-12-30 08:58:37,953 Epoch 18 Step: 5100 Batch Loss: 2.672159 Tokens per Sec: 13469, Lr: 0.000300\n", "2019-12-30 08:58:53,086 Epoch 18 Step: 5200 Batch Loss: 2.490231 Tokens per Sec: 13534, Lr: 0.000300\n", "2019-12-30 08:59:08,432 Epoch 18 Step: 5300 Batch Loss: 2.602473 Tokens per Sec: 13550, Lr: 0.000300\n", "2019-12-30 08:59:13,341 Epoch 18: total training loss 668.35\n", "2019-12-30 08:59:13,341 EPOCH 19\n", "2019-12-30 08:59:23,728 Epoch 19 Step: 5400 Batch Loss: 2.261767 Tokens per Sec: 13329, Lr: 0.000300\n", "2019-12-30 08:59:39,110 Epoch 19 Step: 5500 Batch Loss: 2.301892 Tokens per Sec: 13728, Lr: 0.000300\n", "2019-12-30 08:59:54,289 Epoch 19 Step: 5600 Batch Loss: 2.302068 Tokens per Sec: 13455, Lr: 0.000300\n", "2019-12-30 08:59:58,675 Epoch 19: total training loss 657.99\n", "2019-12-30 08:59:58,675 EPOCH 20\n", "2019-12-30 09:00:09,798 Epoch 20 Step: 5700 Batch Loss: 2.304246 Tokens per Sec: 13501, Lr: 0.000300\n", "2019-12-30 09:00:25,046 Epoch 20 Step: 5800 Batch Loss: 1.931567 Tokens per Sec: 13422, Lr: 0.000300\n", "2019-12-30 09:00:40,469 Epoch 20 Step: 5900 Batch Loss: 2.479141 Tokens per Sec: 13652, Lr: 0.000300\n", "2019-12-30 09:00:43,993 Epoch 20: total training loss 644.58\n", "2019-12-30 09:00:43,993 EPOCH 21\n", "2019-12-30 09:00:55,756 Epoch 21 Step: 6000 Batch Loss: 2.200478 Tokens per Sec: 13348, Lr: 0.000300\n", "2019-12-30 09:01:41,130 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:01:41,130 Saving new checkpoint.\n", "2019-12-30 09:01:41,410 Example #0\n", "2019-12-30 09:01:41,411 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:01:41,411 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:01:41,411 \tHypothesis: Enana cha nẹrhẹ a riẹn oborẹ ayen che ru , ji vwo ẹwẹn rẹ iroro rẹ avwanre ji nene odjekẹ rẹ Baibol na .\n", "2019-12-30 09:01:41,411 Example #1\n", "2019-12-30 09:01:41,411 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:01:41,411 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:01:41,411 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:01:41,411 Example #2\n", "2019-12-30 09:01:41,411 \tSource: But freedom from what ?\n", "2019-12-30 09:01:41,412 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:01:41,412 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen e vwo ru ?\n", "2019-12-30 09:01:41,412 Example #3\n", "2019-12-30 09:01:41,412 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:01:41,412 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:01:41,412 \tHypothesis: Wọ vwẹ ukoko kpokpọ vwo ruiruo kpokpọ na vwo ruiruo .\n", "2019-12-30 09:01:41,412 Validation result (greedy) at epoch 21, step 6000: bleu: 10.53, loss: 55914.1914, ppl: 10.6979, duration: 45.6557s\n", "2019-12-30 09:01:56,758 Epoch 21 Step: 6100 Batch Loss: 1.941352 Tokens per Sec: 13590, Lr: 0.000300\n", "2019-12-30 09:02:12,237 Epoch 21 Step: 6200 Batch Loss: 2.024325 Tokens per Sec: 13554, Lr: 0.000300\n", "2019-12-30 09:02:14,864 Epoch 21: total training loss 632.00\n", "2019-12-30 09:02:14,864 EPOCH 22\n", "2019-12-30 09:02:27,722 Epoch 22 Step: 6300 Batch Loss: 1.551368 Tokens per Sec: 13554, Lr: 0.000300\n", "2019-12-30 09:02:42,963 Epoch 22 Step: 6400 Batch Loss: 2.333829 Tokens per Sec: 13384, Lr: 0.000300\n", "2019-12-30 09:02:58,397 Epoch 22 Step: 6500 Batch Loss: 2.065172 Tokens per Sec: 13666, Lr: 0.000300\n", "2019-12-30 09:03:00,227 Epoch 22: total training loss 626.15\n", "2019-12-30 09:03:00,228 EPOCH 23\n", "2019-12-30 09:03:13,941 Epoch 23 Step: 6600 Batch Loss: 1.588990 Tokens per Sec: 13798, Lr: 0.000300\n", "2019-12-30 09:03:29,079 Epoch 23 Step: 6700 Batch Loss: 1.789111 Tokens per Sec: 13188, Lr: 0.000300\n", "2019-12-30 09:03:44,480 Epoch 23 Step: 6800 Batch Loss: 1.948185 Tokens per Sec: 13599, Lr: 0.000300\n", "2019-12-30 09:03:45,534 Epoch 23: total training loss 615.45\n", "2019-12-30 09:03:45,535 EPOCH 24\n", "2019-12-30 09:03:59,838 Epoch 24 Step: 6900 Batch Loss: 1.346794 Tokens per Sec: 13506, Lr: 0.000300\n", "2019-12-30 09:04:15,239 Epoch 24 Step: 7000 Batch Loss: 1.646855 Tokens per Sec: 13539, Lr: 0.000300\n", "2019-12-30 09:05:00,533 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:05:00,533 Saving new checkpoint.\n", "2019-12-30 09:05:00,786 Example #0\n", "2019-12-30 09:05:00,786 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:05:00,786 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:05:00,786 \tHypothesis: Enana eje djerephia nẹ ayen che se ru obo re chọre , ọ cha nẹrhẹ ayen nene odjekẹ rẹ Baibol na .\n", "2019-12-30 09:05:00,786 Example #1\n", "2019-12-30 09:05:00,787 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:05:00,787 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:05:00,787 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:05:00,787 Example #2\n", "2019-12-30 09:05:00,787 \tSource: But freedom from what ?\n", "2019-12-30 09:05:00,787 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:05:00,787 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen a vwọ dia ọmuvwiẹ ?\n", "2019-12-30 09:05:00,787 Example #3\n", "2019-12-30 09:05:00,787 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:05:00,788 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:05:00,788 \tHypothesis: Wọ vwẹ ukoko kpokpọ vwọ kẹ ukoko wẹn .\n", "2019-12-30 09:05:00,788 Validation result (greedy) at epoch 24, step 7000: bleu: 11.27, loss: 54344.2266, ppl: 10.0092, duration: 45.5484s\n", "2019-12-30 09:05:16,124 Epoch 24 Step: 7100 Batch Loss: 2.186822 Tokens per Sec: 13388, Lr: 0.000300\n", "2019-12-30 09:05:16,570 Epoch 24: total training loss 608.35\n", "2019-12-30 09:05:16,570 EPOCH 25\n", "2019-12-30 09:05:31,387 Epoch 25 Step: 7200 Batch Loss: 2.567016 Tokens per Sec: 13468, Lr: 0.000300\n", "2019-12-30 09:05:46,617 Epoch 25 Step: 7300 Batch Loss: 2.174290 Tokens per Sec: 13288, Lr: 0.000300\n", "2019-12-30 09:06:01,901 Epoch 25 Step: 7400 Batch Loss: 2.307017 Tokens per Sec: 13621, Lr: 0.000300\n", "2019-12-30 09:06:02,058 Epoch 25: total training loss 606.16\n", "2019-12-30 09:06:02,058 EPOCH 26\n", "2019-12-30 09:06:17,257 Epoch 26 Step: 7500 Batch Loss: 2.046378 Tokens per Sec: 13516, Lr: 0.000300\n", "2019-12-30 09:06:32,725 Epoch 26 Step: 7600 Batch Loss: 1.878576 Tokens per Sec: 13513, Lr: 0.000300\n", "2019-12-30 09:06:47,286 Epoch 26: total training loss 586.17\n", "2019-12-30 09:06:47,286 EPOCH 27\n", "2019-12-30 09:06:48,194 Epoch 27 Step: 7700 Batch Loss: 1.911798 Tokens per Sec: 11727, Lr: 0.000300\n", "2019-12-30 09:07:03,324 Epoch 27 Step: 7800 Batch Loss: 1.653833 Tokens per Sec: 13604, Lr: 0.000300\n", "2019-12-30 09:07:18,721 Epoch 27 Step: 7900 Batch Loss: 1.711967 Tokens per Sec: 13586, Lr: 0.000300\n", "2019-12-30 09:07:32,385 Epoch 27: total training loss 586.40\n", "2019-12-30 09:07:32,385 EPOCH 28\n", "2019-12-30 09:07:33,954 Epoch 28 Step: 8000 Batch Loss: 1.357505 Tokens per Sec: 13223, Lr: 0.000300\n", "2019-12-30 09:08:19,193 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:08:19,193 Saving new checkpoint.\n", "2019-12-30 09:08:19,451 Example #0\n", "2019-12-30 09:08:19,451 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:08:19,451 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:08:19,451 \tHypothesis: Enana eje djerephia nẹ ayen che ru obo re chọre , je cha nẹrhẹ e roro kpahen erọnvwọn rẹ avwanre che ru .\n", "2019-12-30 09:08:19,451 Example #1\n", "2019-12-30 09:08:19,452 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:08:19,452 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:08:19,452 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:08:19,452 Example #2\n", "2019-12-30 09:08:19,452 \tSource: But freedom from what ?\n", "2019-12-30 09:08:19,452 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:08:19,452 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen e vwo ru ?\n", "2019-12-30 09:08:19,452 Example #3\n", "2019-12-30 09:08:19,453 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:08:19,453 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:08:19,453 \tHypothesis: Wọ vwẹ ukoko kpokpọ vwo ruiruo vwẹ ukoko wẹn .\n", "2019-12-30 09:08:19,453 Validation result (greedy) at epoch 28, step 8000: bleu: 11.76, loss: 53979.0195, ppl: 9.8554, duration: 45.4986s\n", "2019-12-30 09:08:34,720 Epoch 28 Step: 8100 Batch Loss: 1.113329 Tokens per Sec: 13629, Lr: 0.000300\n", "2019-12-30 09:08:49,953 Epoch 28 Step: 8200 Batch Loss: 1.761469 Tokens per Sec: 13596, Lr: 0.000300\n", "2019-12-30 09:09:02,966 Epoch 28: total training loss 575.56\n", "2019-12-30 09:09:02,966 EPOCH 29\n", "2019-12-30 09:09:05,324 Epoch 29 Step: 8300 Batch Loss: 0.976917 Tokens per Sec: 13636, Lr: 0.000300\n", "2019-12-30 09:09:20,612 Epoch 29 Step: 8400 Batch Loss: 1.823781 Tokens per Sec: 13505, Lr: 0.000300\n", "2019-12-30 09:09:35,884 Epoch 29 Step: 8500 Batch Loss: 1.808622 Tokens per Sec: 13588, Lr: 0.000300\n", "2019-12-30 09:09:48,230 Epoch 29: total training loss 569.11\n", "2019-12-30 09:09:48,230 EPOCH 30\n", "2019-12-30 09:09:51,228 Epoch 30 Step: 8600 Batch Loss: 1.723362 Tokens per Sec: 13241, Lr: 0.000300\n", "2019-12-30 09:10:06,567 Epoch 30 Step: 8700 Batch Loss: 1.231721 Tokens per Sec: 13475, Lr: 0.000300\n", "2019-12-30 09:10:21,756 Epoch 30 Step: 8800 Batch Loss: 2.209485 Tokens per Sec: 13553, Lr: 0.000300\n", "2019-12-30 09:10:33,750 Epoch 30: total training loss 566.47\n", "2019-12-30 09:10:33,750 EPOCH 31\n", "2019-12-30 09:10:36,994 Epoch 31 Step: 8900 Batch Loss: 1.700474 Tokens per Sec: 13194, Lr: 0.000300\n", "2019-12-30 09:10:52,149 Epoch 31 Step: 9000 Batch Loss: 1.672259 Tokens per Sec: 13595, Lr: 0.000300\n", "2019-12-30 09:11:37,358 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:11:37,358 Saving new checkpoint.\n", "2019-12-30 09:11:37,628 Example #0\n", "2019-12-30 09:11:37,629 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:11:37,629 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:11:37,629 \tHypothesis: Enana eje usun rẹ erọnvwọn nana re cha nẹrhẹ ayen riẹn iroro rẹ avwanre kpahen ohwohwo .\n", "2019-12-30 09:11:37,629 Example #1\n", "2019-12-30 09:11:37,629 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:11:37,629 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:11:37,629 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:11:37,629 Example #2\n", "2019-12-30 09:11:37,629 \tSource: But freedom from what ?\n", "2019-12-30 09:11:37,629 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:11:37,629 \tHypothesis: Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 09:11:37,629 Example #3\n", "2019-12-30 09:11:37,629 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:11:37,629 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:11:37,630 \tHypothesis: Wọ sa vwẹ ukoko kpokpọ vwọ kẹ wẹ vẹ ukoko wẹn .\n", "2019-12-30 09:11:37,630 Validation result (greedy) at epoch 31, step 9000: bleu: 12.13, loss: 52996.8359, ppl: 9.4535, duration: 45.4805s\n", "2019-12-30 09:11:52,946 Epoch 31 Step: 9100 Batch Loss: 1.349487 Tokens per Sec: 13437, Lr: 0.000300\n", "2019-12-30 09:12:04,491 Epoch 31: total training loss 554.32\n", "2019-12-30 09:12:04,491 EPOCH 32\n", "2019-12-30 09:12:08,306 Epoch 32 Step: 9200 Batch Loss: 1.566736 Tokens per Sec: 13314, Lr: 0.000300\n", "2019-12-30 09:12:23,721 Epoch 32 Step: 9300 Batch Loss: 1.961726 Tokens per Sec: 13701, Lr: 0.000300\n", "2019-12-30 09:12:39,080 Epoch 32 Step: 9400 Batch Loss: 2.165437 Tokens per Sec: 13716, Lr: 0.000300\n", "2019-12-30 09:12:49,429 Epoch 32: total training loss 542.02\n", "2019-12-30 09:12:49,430 EPOCH 33\n", "2019-12-30 09:12:54,464 Epoch 33 Step: 9500 Batch Loss: 2.219193 Tokens per Sec: 13723, Lr: 0.000300\n", "2019-12-30 09:13:09,793 Epoch 33 Step: 9600 Batch Loss: 1.923394 Tokens per Sec: 13472, Lr: 0.000300\n", "2019-12-30 09:13:25,054 Epoch 33 Step: 9700 Batch Loss: 1.039957 Tokens per Sec: 13624, Lr: 0.000300\n", "2019-12-30 09:13:34,741 Epoch 33: total training loss 543.23\n", "2019-12-30 09:13:34,742 EPOCH 34\n", "2019-12-30 09:13:40,293 Epoch 34 Step: 9800 Batch Loss: 2.400017 Tokens per Sec: 13566, Lr: 0.000300\n", "2019-12-30 09:13:55,544 Epoch 34 Step: 9900 Batch Loss: 1.858500 Tokens per Sec: 13502, Lr: 0.000300\n", "2019-12-30 09:14:10,858 Epoch 34 Step: 10000 Batch Loss: 2.037014 Tokens per Sec: 13437, Lr: 0.000300\n", "2019-12-30 09:14:56,139 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:14:56,139 Saving new checkpoint.\n", "2019-12-30 09:14:56,411 Example #0\n", "2019-12-30 09:14:56,412 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:14:56,412 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:14:56,412 \tHypothesis: Enana idjerhe sansan re se vwo ru ọnana .\n", "2019-12-30 09:14:56,412 Example #1\n", "2019-12-30 09:14:56,412 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:14:56,412 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:14:56,412 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl nonẹna .\n", "2019-12-30 09:14:56,412 Example #2\n", "2019-12-30 09:14:56,412 \tSource: But freedom from what ?\n", "2019-12-30 09:14:56,412 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:14:56,412 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:14:56,412 Example #3\n", "2019-12-30 09:14:56,413 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:14:56,413 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:14:56,413 \tHypothesis: Wọ sa vwẹ ukoko wẹn vwọ kẹ wẹ vẹ ukoko wẹn .\n", "2019-12-30 09:14:56,413 Validation result (greedy) at epoch 34, step 10000: bleu: 12.86, loss: 52344.5703, ppl: 9.1958, duration: 45.5540s\n", "2019-12-30 09:15:05,491 Epoch 34: total training loss 534.46\n", "2019-12-30 09:15:05,491 EPOCH 35\n", "2019-12-30 09:15:11,868 Epoch 35 Step: 10100 Batch Loss: 0.987755 Tokens per Sec: 13497, Lr: 0.000300\n", "2019-12-30 09:15:27,271 Epoch 35 Step: 10200 Batch Loss: 1.132742 Tokens per Sec: 13720, Lr: 0.000300\n", "2019-12-30 09:15:42,541 Epoch 35 Step: 10300 Batch Loss: 2.033857 Tokens per Sec: 13522, Lr: 0.000300\n", "2019-12-30 09:15:50,700 Epoch 35: total training loss 529.11\n", "2019-12-30 09:15:50,700 EPOCH 36\n", "2019-12-30 09:15:57,857 Epoch 36 Step: 10400 Batch Loss: 1.221043 Tokens per Sec: 13357, Lr: 0.000300\n", "2019-12-30 09:16:13,177 Epoch 36 Step: 10500 Batch Loss: 2.050667 Tokens per Sec: 13492, Lr: 0.000300\n", "2019-12-30 09:16:28,484 Epoch 36 Step: 10600 Batch Loss: 1.761496 Tokens per Sec: 13360, Lr: 0.000300\n", "2019-12-30 09:16:36,143 Epoch 36: total training loss 523.65\n", "2019-12-30 09:16:36,143 EPOCH 37\n", "2019-12-30 09:16:43,868 Epoch 37 Step: 10700 Batch Loss: 1.421726 Tokens per Sec: 13549, Lr: 0.000300\n", "2019-12-30 09:16:59,165 Epoch 37 Step: 10800 Batch Loss: 1.493693 Tokens per Sec: 13358, Lr: 0.000300\n", "2019-12-30 09:17:14,413 Epoch 37 Step: 10900 Batch Loss: 2.147563 Tokens per Sec: 13506, Lr: 0.000300\n", "2019-12-30 09:17:21,635 Epoch 37: total training loss 520.72\n", "2019-12-30 09:17:21,635 EPOCH 38\n", "2019-12-30 09:17:29,813 Epoch 38 Step: 11000 Batch Loss: 1.998963 Tokens per Sec: 13468, Lr: 0.000300\n", "2019-12-30 09:18:15,067 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:18:15,068 Saving new checkpoint.\n", "2019-12-30 09:18:15,304 Example #0\n", "2019-12-30 09:18:15,305 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:18:15,305 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:18:15,305 \tHypothesis: Enana evo usun rẹ erọnvwọn re cha nẹrhẹ ayen vwo ẹwẹn rẹ ayen vwo nene odjekẹ rẹ Baibol na .\n", "2019-12-30 09:18:15,305 Example #1\n", "2019-12-30 09:18:15,305 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:18:15,305 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:18:15,305 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl asaọkiephana .\n", "2019-12-30 09:18:15,305 Example #2\n", "2019-12-30 09:18:15,305 \tSource: But freedom from what ?\n", "2019-12-30 09:18:15,305 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:18:15,305 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:18:15,305 Example #3\n", "2019-12-30 09:18:15,305 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:18:15,305 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:18:15,305 \tHypothesis: Wọ sa vwẹ ukẹcha kẹ ukoko wẹn vwo nene uwe yerin kugbe .\n", "2019-12-30 09:18:15,305 Validation result (greedy) at epoch 38, step 11000: bleu: 13.29, loss: 52315.7344, ppl: 9.1845, duration: 45.4920s\n", "2019-12-30 09:18:30,643 Epoch 38 Step: 11100 Batch Loss: 1.933702 Tokens per Sec: 13510, Lr: 0.000300\n", "2019-12-30 09:18:45,904 Epoch 38 Step: 11200 Batch Loss: 1.914126 Tokens per Sec: 13627, Lr: 0.000300\n", "2019-12-30 09:18:52,493 Epoch 38: total training loss 516.01\n", "2019-12-30 09:18:52,493 EPOCH 39\n", "2019-12-30 09:19:01,019 Epoch 39 Step: 11300 Batch Loss: 1.741494 Tokens per Sec: 13738, Lr: 0.000300\n", "2019-12-30 09:19:16,192 Epoch 39 Step: 11400 Batch Loss: 2.037349 Tokens per Sec: 13429, Lr: 0.000300\n", "2019-12-30 09:19:31,505 Epoch 39 Step: 11500 Batch Loss: 1.889197 Tokens per Sec: 13721, Lr: 0.000300\n", "2019-12-30 09:19:37,576 Epoch 39: total training loss 506.51\n", "2019-12-30 09:19:37,576 EPOCH 40\n", "2019-12-30 09:19:46,836 Epoch 40 Step: 11600 Batch Loss: 1.950184 Tokens per Sec: 13856, Lr: 0.000300\n", "2019-12-30 09:20:02,171 Epoch 40 Step: 11700 Batch Loss: 1.756757 Tokens per Sec: 13852, Lr: 0.000300\n", "2019-12-30 09:20:17,264 Epoch 40 Step: 11800 Batch Loss: 1.908435 Tokens per Sec: 13214, Lr: 0.000300\n", "2019-12-30 09:20:22,708 Epoch 40: total training loss 504.16\n", "2019-12-30 09:20:22,708 EPOCH 41\n", "2019-12-30 09:20:32,590 Epoch 41 Step: 11900 Batch Loss: 1.727255 Tokens per Sec: 13542, Lr: 0.000300\n", "2019-12-30 09:20:47,804 Epoch 41 Step: 12000 Batch Loss: 1.705917 Tokens per Sec: 13432, Lr: 0.000300\n", "2019-12-30 09:21:33,032 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:21:33,032 Saving new checkpoint.\n", "2019-12-30 09:21:33,283 Example #0\n", "2019-12-30 09:21:33,284 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:21:33,284 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:21:33,284 \tHypothesis: Enana idjerhe sansan re se vwo ru ọnana .\n", "2019-12-30 09:21:33,284 Example #1\n", "2019-12-30 09:21:33,284 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:21:33,284 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:21:33,284 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:21:33,284 Example #2\n", "2019-12-30 09:21:33,284 \tSource: But freedom from what ?\n", "2019-12-30 09:21:33,284 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:21:33,284 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:21:33,284 Example #3\n", "2019-12-30 09:21:33,285 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:21:33,285 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:21:33,285 \tHypothesis: Wọ vwẹ ukoko wẹn vwo mu we vwo nene oma wẹn .\n", "2019-12-30 09:21:33,285 Validation result (greedy) at epoch 41, step 12000: bleu: 13.25, loss: 51660.8047, ppl: 8.9331, duration: 45.4798s\n", "2019-12-30 09:21:48,517 Epoch 41 Step: 12100 Batch Loss: 1.312235 Tokens per Sec: 13624, Lr: 0.000300\n", "2019-12-30 09:21:53,562 Epoch 41: total training loss 500.11\n", "2019-12-30 09:21:53,562 EPOCH 42\n", "2019-12-30 09:22:03,918 Epoch 42 Step: 12200 Batch Loss: 1.770682 Tokens per Sec: 13768, Lr: 0.000300\n", "2019-12-30 09:22:19,161 Epoch 42 Step: 12300 Batch Loss: 1.447190 Tokens per Sec: 13382, Lr: 0.000300\n", "2019-12-30 09:22:34,483 Epoch 42 Step: 12400 Batch Loss: 1.306942 Tokens per Sec: 13427, Lr: 0.000300\n", "2019-12-30 09:22:38,703 Epoch 42: total training loss 488.47\n", "2019-12-30 09:22:38,703 EPOCH 43\n", "2019-12-30 09:22:49,861 Epoch 43 Step: 12500 Batch Loss: 1.566188 Tokens per Sec: 13721, Lr: 0.000300\n", "2019-12-30 09:23:05,019 Epoch 43 Step: 12600 Batch Loss: 1.746163 Tokens per Sec: 13269, Lr: 0.000300\n", "2019-12-30 09:23:20,293 Epoch 43 Step: 12700 Batch Loss: 1.756132 Tokens per Sec: 13648, Lr: 0.000300\n", "2019-12-30 09:23:23,851 Epoch 43: total training loss 487.21\n", "2019-12-30 09:23:23,851 EPOCH 44\n", "2019-12-30 09:23:35,615 Epoch 44 Step: 12800 Batch Loss: 1.585692 Tokens per Sec: 13465, Lr: 0.000300\n", "2019-12-30 09:23:50,868 Epoch 44 Step: 12900 Batch Loss: 1.690818 Tokens per Sec: 13637, Lr: 0.000300\n", "2019-12-30 09:24:06,139 Epoch 44 Step: 13000 Batch Loss: 1.793420 Tokens per Sec: 13584, Lr: 0.000300\n", "2019-12-30 09:24:51,385 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:24:51,385 Saving new checkpoint.\n", "2019-12-30 09:24:51,748 Example #0\n", "2019-12-30 09:24:51,749 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:24:51,749 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:24:51,749 \tHypothesis: Enana idjerhe evo re se vwo ru ọnana womarẹ ota rẹ unu rẹ ihwo re cha nẹrhẹ ẹwẹn avwanre totọ .\n", "2019-12-30 09:24:51,749 Example #1\n", "2019-12-30 09:24:51,749 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:24:51,749 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:24:51,749 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:24:51,749 Example #2\n", "2019-12-30 09:24:51,749 \tSource: But freedom from what ?\n", "2019-12-30 09:24:51,749 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:24:51,749 \tHypothesis: Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 09:24:51,750 Example #3\n", "2019-12-30 09:24:51,750 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:24:51,750 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:24:51,750 \tHypothesis: Wọ vwẹ ukoko wẹn vwọ vwanvwe ọ rẹ ohwo kpokpọ na .\n", "2019-12-30 09:24:51,750 Validation result (greedy) at epoch 44, step 13000: bleu: 13.76, loss: 51232.8281, ppl: 8.7725, duration: 45.6111s\n", "2019-12-30 09:24:54,708 Epoch 44: total training loss 484.71\n", "2019-12-30 09:24:54,708 EPOCH 45\n", "2019-12-30 09:25:07,077 Epoch 45 Step: 13100 Batch Loss: 1.659864 Tokens per Sec: 13833, Lr: 0.000300\n", "2019-12-30 09:25:22,129 Epoch 45 Step: 13200 Batch Loss: 0.953821 Tokens per Sec: 13526, Lr: 0.000300\n", "2019-12-30 09:25:37,548 Epoch 45 Step: 13300 Batch Loss: 1.477944 Tokens per Sec: 13490, Lr: 0.000300\n", "2019-12-30 09:25:39,905 Epoch 45: total training loss 478.12\n", "2019-12-30 09:25:39,905 EPOCH 46\n", "2019-12-30 09:25:52,830 Epoch 46 Step: 13400 Batch Loss: 1.246159 Tokens per Sec: 13701, Lr: 0.000300\n", "2019-12-30 09:26:07,893 Epoch 46 Step: 13500 Batch Loss: 1.455083 Tokens per Sec: 13330, Lr: 0.000300\n", "2019-12-30 09:26:23,325 Epoch 46 Step: 13600 Batch Loss: 1.559840 Tokens per Sec: 13783, Lr: 0.000300\n", "2019-12-30 09:26:24,971 Epoch 46: total training loss 472.71\n", "2019-12-30 09:26:24,971 EPOCH 47\n", "2019-12-30 09:26:38,615 Epoch 47 Step: 13700 Batch Loss: 1.666464 Tokens per Sec: 13488, Lr: 0.000300\n", "2019-12-30 09:26:53,830 Epoch 47 Step: 13800 Batch Loss: 1.450549 Tokens per Sec: 13428, Lr: 0.000300\n", "2019-12-30 09:27:09,053 Epoch 47 Step: 13900 Batch Loss: 1.533553 Tokens per Sec: 13480, Lr: 0.000300\n", "2019-12-30 09:27:10,343 Epoch 47: total training loss 472.72\n", "2019-12-30 09:27:10,343 EPOCH 48\n", "2019-12-30 09:27:24,405 Epoch 48 Step: 14000 Batch Loss: 1.284199 Tokens per Sec: 13701, Lr: 0.000300\n", "2019-12-30 09:28:09,671 Example #0\n", "2019-12-30 09:28:09,671 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:28:09,671 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:28:09,671 \tHypothesis: Enana idjerhe sansan re se vwo ru ọnana womarẹ eta rẹ Baibol na vẹ ẹwẹn rẹ avwanre .\n", "2019-12-30 
09:28:09,671 Example #1\n", "2019-12-30 09:28:09,672 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:28:09,672 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:28:09,672 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl nonẹna .\n", "2019-12-30 09:28:09,672 Example #2\n", "2019-12-30 09:28:09,672 \tSource: But freedom from what ?\n", "2019-12-30 09:28:09,672 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:28:09,672 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:28:09,672 Example #3\n", "2019-12-30 09:28:09,672 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:28:09,672 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:28:09,672 \tHypothesis: Wọ vwẹ ukoko kpokpọ vwo mu we vwọ bọn ukoko wẹn gan .\n", "2019-12-30 09:28:09,672 Validation result (greedy) at epoch 48, step 14000: bleu: 14.09, loss: 51388.3633, ppl: 8.8305, duration: 45.2669s\n", "2019-12-30 09:28:25,095 Epoch 48 Step: 14100 Batch Loss: 1.517861 Tokens per Sec: 13538, Lr: 0.000300\n", "2019-12-30 09:28:40,287 Epoch 48 Step: 14200 Batch Loss: 2.108052 Tokens per Sec: 13607, Lr: 0.000300\n", "2019-12-30 09:28:40,603 Epoch 48: total training loss 463.78\n", "2019-12-30 09:28:40,603 EPOCH 49\n", "2019-12-30 09:28:55,837 Epoch 49 Step: 14300 Batch Loss: 1.001992 Tokens per Sec: 13541, Lr: 0.000300\n", "2019-12-30 09:29:11,063 Epoch 49 Step: 14400 Batch Loss: 0.958567 Tokens per Sec: 13461, Lr: 0.000300\n", "2019-12-30 09:29:25,722 Epoch 49: total training loss 455.18\n", "2019-12-30 09:29:25,723 EPOCH 50\n", "2019-12-30 09:29:26,516 Epoch 50 Step: 14500 Batch Loss: 1.616237 Tokens per Sec: 12155, Lr: 0.000300\n", "2019-12-30 09:29:41,782 Epoch 50 Step: 14600 Batch Loss: 0.792938 Tokens per Sec: 13605, Lr: 0.000300\n", "2019-12-30 09:29:57,133 Epoch 50 Step: 14700 Batch Loss: 1.930822 Tokens per Sec: 13655, Lr: 0.000300\n", "2019-12-30 09:30:11,202 Epoch 50: total training loss 460.34\n", "2019-12-30 09:30:11,202 EPOCH 51\n", "2019-12-30 09:30:12,310 Epoch 51 Step: 14800 Batch Loss: 1.229054 Tokens per Sec: 12726, Lr: 0.000300\n", "2019-12-30 09:30:27,589 Epoch 51 Step: 14900 Batch Loss: 1.754501 Tokens per Sec: 13522, Lr: 0.000300\n", "2019-12-30 09:30:42,941 Epoch 51 Step: 15000 Batch Loss: 1.240174 Tokens per Sec: 13669, Lr: 0.000300\n", "2019-12-30 09:31:28,278 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:31:28,278 Saving new checkpoint.\n", "2019-12-30 09:31:28,533 Example #0\n", "2019-12-30 09:31:28,533 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:31:28,533 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:31:28,533 \tHypothesis: Enana evo usun rẹ erọnvwọn re cha nẹrhẹ ayen karophiyọ nẹ ayen che brorhiẹn rẹ ayen cha vwọ riẹn iroro vẹ uruemu rẹ avwanre .\n", "2019-12-30 09:31:28,533 Example #1\n", "2019-12-30 09:31:28,534 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:31:28,534 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:31:28,534 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:31:28,534 Example #2\n", "2019-12-30 09:31:28,534 \tSource: But freedom from what ?\n", "2019-12-30 09:31:28,534 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:31:28,534 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:31:28,534 Example #3\n", "2019-12-30 09:31:28,534 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:31:28,534 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:31:28,534 \tHypothesis: Wọ vwẹ ukoko wẹn vwọ chọn wẹ uko vwo muegbe rẹ uyono wẹn .\n", "2019-12-30 09:31:28,534 Validation result (greedy) at epoch 51, step 15000: bleu: 14.15, loss: 51209.9375, ppl: 8.7640, duration: 45.5935s\n", "2019-12-30 09:31:42,149 Epoch 51: total training loss 455.41\n", "2019-12-30 09:31:42,149 EPOCH 52\n", "2019-12-30 09:31:43,704 Epoch 52 Step: 15100 Batch Loss: 1.595414 Tokens per Sec: 13582, Lr: 0.000300\n", "2019-12-30 09:31:59,000 Epoch 52 Step: 15200 Batch Loss: 1.565907 Tokens per Sec: 13535, Lr: 0.000300\n", "2019-12-30 09:32:14,215 Epoch 52 Step: 15300 Batch Loss: 1.656942 Tokens per Sec: 13304, Lr: 0.000300\n", "2019-12-30 09:32:27,558 Epoch 52: total training loss 449.79\n", "2019-12-30 09:32:27,558 EPOCH 53\n", "2019-12-30 09:32:29,579 Epoch 53 Step: 15400 Batch Loss: 1.439989 Tokens per Sec: 13282, Lr: 0.000300\n", "2019-12-30 09:32:44,861 Epoch 53 Step: 15500 Batch Loss: 1.291759 Tokens per Sec: 13566, Lr: 0.000300\n", "2019-12-30 09:33:00,176 Epoch 53 Step: 15600 Batch Loss: 1.700023 Tokens per Sec: 13538, Lr: 0.000300\n", "2019-12-30 09:33:12,896 Epoch 53: total training loss 445.23\n", "2019-12-30 09:33:12,896 EPOCH 54\n", "2019-12-30 09:33:15,421 Epoch 54 Step: 15700 Batch Loss: 1.480523 Tokens per Sec: 13256, Lr: 0.000300\n", "2019-12-30 09:33:30,787 Epoch 54 Step: 15800 Batch Loss: 1.556042 Tokens per Sec: 13463, Lr: 0.000300\n", "2019-12-30 09:33:46,132 Epoch 54 Step: 15900 Batch Loss: 1.107126 Tokens per Sec: 13524, Lr: 0.000300\n", "2019-12-30 09:33:58,301 Epoch 54: total training loss 443.60\n", "2019-12-30 09:33:58,302 EPOCH 55\n", "2019-12-30 09:34:01,336 Epoch 55 Step: 16000 Batch Loss: 1.065430 Tokens per Sec: 12624, Lr: 0.000300\n", "2019-12-30 09:34:46,613 Example #0\n", "2019-12-30 09:34:46,614 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:34:46,614 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:34:46,614 \tHypothesis: Enana idjerhe sansan re se vwo ru ọrhuẹrẹphiyotọ rẹ ayen che vwo 
nene ubiudu rẹ ihwo efa .\n", "2019-12-30 09:34:46,614 Example #1\n", "2019-12-30 09:34:46,614 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:34:46,614 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:34:46,614 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:34:46,614 Example #2\n", "2019-12-30 09:34:46,614 \tSource: But freedom from what ?\n", "2019-12-30 09:34:46,614 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:34:46,614 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:34:46,614 Example #3\n", "2019-12-30 09:34:46,615 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:34:46,615 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:34:46,615 \tHypothesis: Wọ vwẹ ukoko wẹn vwọ vwanvwen ukoko wẹn .\n", "2019-12-30 09:34:46,615 Validation result (greedy) at epoch 55, step 16000: bleu: 14.42, loss: 51274.9414, ppl: 8.7881, duration: 45.2788s\n", "2019-12-30 09:35:01,965 Epoch 55 Step: 16100 Batch Loss: 1.418494 Tokens per Sec: 13581, Lr: 0.000300\n", "2019-12-30 09:35:17,341 Epoch 55 Step: 16200 Batch Loss: 1.571644 Tokens per Sec: 13563, Lr: 0.000300\n", "2019-12-30 09:35:28,831 Epoch 55: total training loss 435.12\n", "2019-12-30 09:35:28,832 EPOCH 56\n", "2019-12-30 09:35:32,675 Epoch 56 Step: 16300 Batch Loss: 1.894797 Tokens per Sec: 13471, Lr: 0.000300\n", "2019-12-30 09:35:47,941 Epoch 56 Step: 16400 Batch Loss: 1.753034 Tokens per Sec: 13449, Lr: 0.000300\n", "2019-12-30 09:36:03,293 Epoch 56 Step: 16500 Batch Loss: 1.688093 Tokens per Sec: 13619, Lr: 0.000300\n", "2019-12-30 09:36:14,455 Epoch 56: total training loss 436.16\n", "2019-12-30 09:36:14,455 EPOCH 57\n", "2019-12-30 09:36:18,756 Epoch 57 Step: 16600 Batch Loss: 1.780148 Tokens per Sec: 14235, Lr: 0.000300\n", "2019-12-30 09:36:34,046 Epoch 57 Step: 16700 Batch Loss: 1.624623 Tokens per Sec: 13337, Lr: 0.000300\n", "2019-12-30 09:36:49,286 Epoch 57 Step: 16800 Batch Loss: 1.636462 Tokens per Sec: 13592, Lr: 0.000300\n", "2019-12-30 09:36:59,619 Epoch 57: total training loss 429.17\n", "2019-12-30 09:36:59,619 EPOCH 58\n", "2019-12-30 09:37:04,530 Epoch 58 Step: 16900 Batch Loss: 1.647993 Tokens per Sec: 13439, Lr: 0.000300\n", "2019-12-30 09:37:19,873 Epoch 58 Step: 17000 Batch Loss: 1.673022 Tokens per Sec: 13378, Lr: 0.000300\n", "2019-12-30 09:38:05,032 Example #0\n", "2019-12-30 09:38:05,033 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:38:05,033 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:38:05,033 \tHypothesis: Enana eje cha chọn avwanre uko vwọ riẹn nẹ ayen che nene ayen ta ota kugbe .\n", "2019-12-30 09:38:05,033 Example #1\n", "2019-12-30 09:38:05,033 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:38:05,033 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:38:05,033 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:38:05,033 Example #2\n", "2019-12-30 09:38:05,033 \tSource: But freedom from what ?\n", "2019-12-30 09:38:05,033 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:38:05,033 \tHypothesis: ( Iruo Rẹ Iyinkọn Na 2 : 4 ) Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 09:38:05,033 Example #3\n", "2019-12-30 09:38:05,034 \tSource: Avoid comparing your new congregation with your previous 
one .\n", "2019-12-30 09:38:05,034 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:38:05,034 \tHypothesis: Wọ vwẹ ukoko wẹn vwọ chọn wẹ uko vwo vwo vwo uruemu rẹ omaẹfẹnẹ .\n", "2019-12-30 09:38:05,034 Validation result (greedy) at epoch 58, step 17000: bleu: 14.14, loss: 51369.2930, ppl: 8.8234, duration: 45.1601s\n", "2019-12-30 09:38:20,472 Epoch 58 Step: 17100 Batch Loss: 1.542433 Tokens per Sec: 13618, Lr: 0.000300\n", "2019-12-30 09:38:30,008 Epoch 58: total training loss 424.63\n", "2019-12-30 09:38:30,008 EPOCH 59\n", "2019-12-30 09:38:35,783 Epoch 59 Step: 17200 Batch Loss: 1.522534 Tokens per Sec: 13273, Lr: 0.000300\n", "2019-12-30 09:38:51,153 Epoch 59 Step: 17300 Batch Loss: 1.817454 Tokens per Sec: 13772, Lr: 0.000300\n", "2019-12-30 09:39:06,331 Epoch 59 Step: 17400 Batch Loss: 1.859855 Tokens per Sec: 13559, Lr: 0.000300\n", "2019-12-30 09:39:15,035 Epoch 59: total training loss 421.84\n", "2019-12-30 09:39:15,036 EPOCH 60\n", "2019-12-30 09:39:21,701 Epoch 60 Step: 17500 Batch Loss: 1.315588 Tokens per Sec: 13287, Lr: 0.000300\n", "2019-12-30 09:39:36,998 Epoch 60 Step: 17600 Batch Loss: 1.425480 Tokens per Sec: 13541, Lr: 0.000300\n", "2019-12-30 09:39:52,468 Epoch 60 Step: 17700 Batch Loss: 1.358012 Tokens per Sec: 13632, Lr: 0.000300\n", "2019-12-30 09:40:00,167 Epoch 60: total training loss 416.60\n", "2019-12-30 09:40:00,167 EPOCH 61\n", "2019-12-30 09:40:07,709 Epoch 61 Step: 17800 Batch Loss: 1.315994 Tokens per Sec: 13021, Lr: 0.000300\n", "2019-12-30 09:40:22,986 Epoch 61 Step: 17900 Batch Loss: 1.682872 Tokens per Sec: 13565, Lr: 0.000300\n", "2019-12-30 09:40:38,386 Epoch 61 Step: 18000 Batch Loss: 1.624770 Tokens per Sec: 13413, Lr: 0.000300\n", "2019-12-30 09:41:23,626 Hooray! 
New best validation result [ppl]!\n", "2019-12-30 09:41:23,626 Saving new checkpoint.\n", "2019-12-30 09:41:23,942 Example #0\n", "2019-12-30 09:41:23,942 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:41:23,942 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:41:23,942 \tHypothesis: Enana evo usun rẹ erọnvwọn re cha chọn avwanre uko vwọ riẹn iroro rẹ avwanre , ji nene odjekẹ rẹ Baibol na .\n", "2019-12-30 09:41:23,942 Example #1\n", "2019-12-30 09:41:23,943 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:41:23,943 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:41:23,943 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl asaọkiephana .\n", "2019-12-30 09:41:23,943 Example #2\n", "2019-12-30 09:41:23,943 \tSource: But freedom from what ?\n", "2019-12-30 09:41:23,943 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:41:23,943 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:41:23,943 Example #3\n", "2019-12-30 09:41:23,943 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:41:23,943 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:41:23,943 \tHypothesis: Wọ vwẹ ukoko wẹn vwọ sẹro rẹ ukoko wẹn .\n", "2019-12-30 09:41:23,943 Validation result (greedy) at epoch 61, step 18000: bleu: 14.77, loss: 51154.0352, ppl: 8.7432, duration: 45.5568s\n", "2019-12-30 09:41:31,232 Epoch 61: total training loss 417.60\n", "2019-12-30 09:41:31,232 EPOCH 62\n", "2019-12-30 09:41:39,463 Epoch 62 Step: 18100 Batch Loss: 1.607777 Tokens per Sec: 13786, Lr: 0.000300\n", "2019-12-30 09:41:54,737 Epoch 62 Step: 18200 Batch Loss: 1.240232 Tokens per Sec: 13385, Lr: 0.000300\n", "2019-12-30 09:42:10,164 Epoch 62 Step: 18300 Batch Loss: 1.489730 Tokens per Sec: 13696, Lr: 0.000300\n", "2019-12-30 09:42:16,301 Epoch 62: total training loss 408.98\n", "2019-12-30 09:42:16,301 EPOCH 63\n", "2019-12-30 09:42:25,454 Epoch 63 Step: 18400 Batch Loss: 1.388070 Tokens per Sec: 13815, Lr: 0.000300\n", "2019-12-30 09:42:40,775 Epoch 63 Step: 18500 Batch Loss: 1.565442 Tokens per Sec: 13210, Lr: 0.000300\n", "2019-12-30 09:42:56,162 Epoch 63 Step: 18600 Batch Loss: 1.410524 Tokens per Sec: 13649, Lr: 0.000300\n", "2019-12-30 09:43:01,480 Epoch 63: total training loss 408.00\n", "2019-12-30 09:43:01,480 EPOCH 64\n", "2019-12-30 09:43:11,628 Epoch 64 Step: 18700 Batch Loss: 1.243400 Tokens per Sec: 13496, Lr: 0.000300\n", "2019-12-30 09:43:26,783 Epoch 64 Step: 18800 Batch Loss: 1.215857 Tokens per Sec: 13440, Lr: 0.000300\n", "2019-12-30 09:43:42,194 Epoch 64 Step: 18900 Batch Loss: 1.539639 Tokens per Sec: 13533, Lr: 0.000300\n", "2019-12-30 09:43:46,650 Epoch 64: total training loss 405.50\n", "2019-12-30 09:43:46,650 EPOCH 65\n", "2019-12-30 09:43:57,671 Epoch 65 Step: 19000 Batch Loss: 1.539200 Tokens per Sec: 13763, Lr: 0.000300\n", "2019-12-30 09:44:42,850 Example #0\n", "2019-12-30 09:44:42,851 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:44:42,851 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:44:42,851 \tHypothesis: Enana nẹrhẹ e se vwo ẹruọ rẹ ẹdia evo vwẹ idjerhe rẹ ẹwẹn avwanre , ji vwo ẹwẹn ra vwọ 
nabọ nene odjekẹ rẹ Baibol na .\n", "2019-12-30 09:44:42,851 Example #1\n", "2019-12-30 09:44:42,851 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:44:42,851 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:44:42,851 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl asaọkiephana .\n", "2019-12-30 09:44:42,851 Example #2\n", "2019-12-30 09:44:42,851 \tSource: But freedom from what ?\n", "2019-12-30 09:44:42,851 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:44:42,851 \tHypothesis: ( 1 Pita 3 : 1 ) Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 09:44:42,851 Example #3\n", "2019-12-30 09:44:42,851 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:44:42,851 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:44:42,852 \tHypothesis: Wọ vwẹ ukoko wẹn vwọ jehwẹ , wọ me je vuẹ ohwo ọvo nẹ o vwo ukoko wẹn .\n", "2019-12-30 09:44:42,852 Validation result (greedy) at epoch 65, step 19000: bleu: 14.83, loss: 51345.7578, ppl: 8.8146, duration: 45.1803s\n", "2019-12-30 09:44:58,047 Epoch 65 Step: 19100 Batch Loss: 1.577752 Tokens per Sec: 13544, Lr: 0.000300\n", "2019-12-30 09:45:13,327 Epoch 65 Step: 19200 Batch Loss: 1.572376 Tokens per Sec: 13446, Lr: 0.000300\n", "2019-12-30 09:45:16,878 Epoch 65: total training loss 401.49\n", "2019-12-30 09:45:16,878 EPOCH 66\n", "2019-12-30 09:45:28,726 Epoch 66 Step: 19300 Batch Loss: 1.483958 Tokens per Sec: 13378, Lr: 0.000300\n", "2019-12-30 09:45:44,194 Epoch 66 Step: 19400 Batch Loss: 1.596878 Tokens per Sec: 13702, Lr: 0.000300\n", "2019-12-30 09:45:59,603 Epoch 66 Step: 19500 Batch Loss: 1.507257 Tokens per Sec: 13509, Lr: 0.000300\n", "2019-12-30 09:46:02,281 Epoch 66: total training loss 399.60\n", "2019-12-30 09:46:02,282 EPOCH 67\n", "2019-12-30 09:46:14,848 Epoch 67 Step: 19600 Batch Loss: 1.593485 Tokens per Sec: 13470, Lr: 0.000300\n", "2019-12-30 09:46:30,152 Epoch 67 Step: 19700 Batch Loss: 1.533468 Tokens per Sec: 13625, Lr: 0.000300\n", "2019-12-30 09:46:45,292 Epoch 67 Step: 19800 Batch Loss: 1.543039 Tokens per Sec: 13264, Lr: 0.000300\n", "2019-12-30 09:46:47,747 Epoch 67: total training loss 400.63\n", "2019-12-30 09:46:47,747 EPOCH 68\n", "2019-12-30 09:47:00,707 Epoch 68 Step: 19900 Batch Loss: 0.936133 Tokens per Sec: 13711, Lr: 0.000300\n", "2019-12-30 09:47:15,826 Epoch 68 Step: 20000 Batch Loss: 1.668921 Tokens per Sec: 13268, Lr: 0.000300\n", "2019-12-30 09:48:01,083 Example #0\n", "2019-12-30 09:48:01,083 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:48:01,083 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:48:01,083 \tHypothesis: Enana idjerhe evo re se vwo muegbe rẹ ikuegbe rẹ oborẹ ayen che ru , je karophiyọ ayen .\n", "2019-12-30 09:48:01,083 Example #1\n", "2019-12-30 09:48:01,084 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:48:01,084 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:48:01,084 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl asaọkiephana .\n", "2019-12-30 09:48:01,084 Example #2\n", "2019-12-30 09:48:01,084 \tSource: But freedom from what ?\n", "2019-12-30 09:48:01,084 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:48:01,084 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:48:01,084 Example #3\n", "2019-12-30 09:48:01,084 
\tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:48:01,084 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:48:01,084 \tHypothesis: Wọ vwẹ ukoko wẹn vwọ kpahotọ kẹ ekogho wẹn .\n", "2019-12-30 09:48:01,084 Validation result (greedy) at epoch 68, step 20000: bleu: 15.02, loss: 51586.1914, ppl: 8.9049, duration: 45.2584s\n", "2019-12-30 09:48:16,612 Epoch 68 Step: 20100 Batch Loss: 1.758633 Tokens per Sec: 13751, Lr: 0.000300\n", "2019-12-30 09:48:18,345 Epoch 68: total training loss 396.32\n", "2019-12-30 09:48:18,345 EPOCH 69\n", "2019-12-30 09:48:31,726 Epoch 69 Step: 20200 Batch Loss: 1.234549 Tokens per Sec: 13480, Lr: 0.000300\n", "2019-12-30 09:48:47,055 Epoch 69 Step: 20300 Batch Loss: 1.265672 Tokens per Sec: 13545, Lr: 0.000300\n", "2019-12-30 09:49:02,373 Epoch 69 Step: 20400 Batch Loss: 1.461195 Tokens per Sec: 13707, Lr: 0.000300\n", "2019-12-30 09:49:03,525 Epoch 69: total training loss 391.26\n", "2019-12-30 09:49:03,525 EPOCH 70\n", "2019-12-30 09:49:17,584 Epoch 70 Step: 20500 Batch Loss: 1.438060 Tokens per Sec: 13534, Lr: 0.000300\n", "2019-12-30 09:49:32,853 Epoch 70 Step: 20600 Batch Loss: 1.805272 Tokens per Sec: 13578, Lr: 0.000300\n", "2019-12-30 09:49:48,133 Epoch 70 Step: 20700 Batch Loss: 1.765993 Tokens per Sec: 13559, Lr: 0.000300\n", "2019-12-30 09:49:48,738 Epoch 70: total training loss 389.40\n", "2019-12-30 09:49:48,738 EPOCH 71\n", "2019-12-30 09:50:03,370 Epoch 71 Step: 20800 Batch Loss: 1.600935 Tokens per Sec: 13500, Lr: 0.000300\n", "2019-12-30 09:50:18,663 Epoch 71 Step: 20900 Batch Loss: 1.251562 Tokens per Sec: 13604, Lr: 0.000300\n", "2019-12-30 09:50:33,983 Epoch 71 Step: 21000 Batch Loss: 1.321347 Tokens per Sec: 13487, Lr: 0.000300\n", "2019-12-30 09:51:19,143 Example #0\n", "2019-12-30 09:51:19,143 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:51:19,143 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:51:19,143 \tHypothesis: Enana evo usun rẹ erọnvwọn nana re cha nẹrhẹ a riẹn nẹ ayen che muegbe rẹ ubiudu rẹ ohwo .\n", "2019-12-30 09:51:19,143 Example #1\n", "2019-12-30 09:51:19,143 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:51:19,143 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:51:19,143 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl asaọkiephana .\n", "2019-12-30 09:51:19,143 Example #2\n", "2019-12-30 09:51:19,143 \tSource: But freedom from what ?\n", "2019-12-30 09:51:19,143 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:51:19,143 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:51:19,143 Example #3\n", "2019-12-30 09:51:19,144 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:51:19,144 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:51:19,144 \tHypothesis: Wọ vwẹ ukoko kpokpọ vwo mu we vẹ ohwo ọfa .\n", "2019-12-30 09:51:19,144 Validation result (greedy) at epoch 71, step 21000: bleu: 14.85, loss: 51528.2109, ppl: 8.8830, duration: 45.1599s\n", "2019-12-30 09:51:19,145 Epoch 71: total training loss 386.79\n", "2019-12-30 09:51:19,145 EPOCH 72\n", "2019-12-30 09:51:34,594 Epoch 72 Step: 21100 Batch Loss: 1.358493 Tokens per Sec: 13776, Lr: 0.000300\n", "2019-12-30 09:51:49,840 Epoch 72 Step: 21200 Batch Loss: 
1.562074 Tokens per Sec: 13472, Lr: 0.000300\n", "2019-12-30 09:52:04,248 Epoch 72: total training loss 381.24\n", "2019-12-30 09:52:04,249 EPOCH 73\n", "2019-12-30 09:52:05,056 Epoch 73 Step: 21300 Batch Loss: 1.473367 Tokens per Sec: 12795, Lr: 0.000300\n", "2019-12-30 09:52:20,299 Epoch 73 Step: 21400 Batch Loss: 1.444798 Tokens per Sec: 13647, Lr: 0.000300\n", "2019-12-30 09:52:35,633 Epoch 73 Step: 21500 Batch Loss: 1.623643 Tokens per Sec: 13725, Lr: 0.000300\n", "2019-12-30 09:52:49,226 Epoch 73: total training loss 379.96\n", "2019-12-30 09:52:49,226 EPOCH 74\n", "2019-12-30 09:52:50,790 Epoch 74 Step: 21600 Batch Loss: 1.125688 Tokens per Sec: 11562, Lr: 0.000300\n", "2019-12-30 09:53:06,108 Epoch 74 Step: 21700 Batch Loss: 0.807849 Tokens per Sec: 13354, Lr: 0.000300\n", "2019-12-30 09:53:21,565 Epoch 74 Step: 21800 Batch Loss: 0.949176 Tokens per Sec: 13767, Lr: 0.000300\n", "2019-12-30 09:53:34,584 Epoch 74: total training loss 379.18\n", "2019-12-30 09:53:34,584 EPOCH 75\n", "2019-12-30 09:53:36,909 Epoch 75 Step: 21900 Batch Loss: 1.198095 Tokens per Sec: 12996, Lr: 0.000300\n", "2019-12-30 09:53:52,187 Epoch 75 Step: 22000 Batch Loss: 1.293452 Tokens per Sec: 13631, Lr: 0.000300\n", "2019-12-30 09:54:37,443 Example #0\n", "2019-12-30 09:54:37,444 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:54:37,444 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:54:37,444 \tHypothesis: Enana evo usun rẹ erọnvwọn re cha nẹrhẹ ayen brorhiẹn rẹ oborẹ ẹwẹn avwanre che kuọrọn , je karophiyọ nẹ ayen che nene ọrhuẹrẹphiyotọ rẹ avwanre .\n", "2019-12-30 09:54:37,444 Example #1\n", "2019-12-30 09:54:37,444 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:54:37,444 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:54:37,444 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:54:37,444 Example #2\n", "2019-12-30 09:54:37,444 \tSource: But freedom from what ?\n", "2019-12-30 09:54:37,444 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:54:37,445 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:54:37,445 Example #3\n", "2019-12-30 09:54:37,445 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:54:37,445 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:54:37,445 \tHypothesis: Wọ vwẹ ukoko wẹn vwo mu rere wo se vwo vwo vwo oyerinkugbe vẹ Jihova .\n", "2019-12-30 09:54:37,445 Validation result (greedy) at epoch 75, step 22000: bleu: 15.32, loss: 52141.0000, ppl: 9.1168, duration: 45.2582s\n", "2019-12-30 09:54:52,679 Epoch 75 Step: 22100 Batch Loss: 1.474287 Tokens per Sec: 13590, Lr: 0.000300\n", "2019-12-30 09:55:05,016 Epoch 75: total training loss 375.80\n", "2019-12-30 09:55:05,017 EPOCH 76\n", "2019-12-30 09:55:07,998 Epoch 76 Step: 22200 Batch Loss: 1.414726 Tokens per Sec: 13263, Lr: 0.000300\n", "2019-12-30 09:55:23,146 Epoch 76 Step: 22300 Batch Loss: 0.682728 Tokens per Sec: 13549, Lr: 0.000300\n", "2019-12-30 09:55:38,582 Epoch 76 Step: 22400 Batch Loss: 0.557675 Tokens per Sec: 13586, Lr: 0.000300\n", "2019-12-30 09:55:50,111 Epoch 76: total training loss 373.01\n", "2019-12-30 09:55:50,111 EPOCH 77\n", "2019-12-30 09:55:53,928 Epoch 77 Step: 22500 Batch Loss: 1.344467 Tokens per Sec: 13734, Lr: 0.000300\n", "2019-12-30 09:56:09,222 Epoch 77 
Step: 22600 Batch Loss: 1.150862 Tokens per Sec: 13407, Lr: 0.000300\n", "2019-12-30 09:56:24,436 Epoch 77 Step: 22700 Batch Loss: 1.389134 Tokens per Sec: 13574, Lr: 0.000300\n", "2019-12-30 09:56:35,158 Epoch 77: total training loss 369.61\n", "2019-12-30 09:56:35,158 EPOCH 78\n", "2019-12-30 09:56:39,812 Epoch 78 Step: 22800 Batch Loss: 0.962061 Tokens per Sec: 13922, Lr: 0.000300\n", "2019-12-30 09:56:54,888 Epoch 78 Step: 22900 Batch Loss: 1.071824 Tokens per Sec: 13243, Lr: 0.000300\n", "2019-12-30 09:57:10,224 Epoch 78 Step: 23000 Batch Loss: 0.954888 Tokens per Sec: 13749, Lr: 0.000300\n", "2019-12-30 09:57:55,453 Example #0\n", "2019-12-30 09:57:55,453 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 09:57:55,453 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 09:57:55,453 \tHypothesis: Enana evo usun rẹ erọnvwọn re cha chọn avwanre uko vwo vwo ẹruọ rẹ ẹdia rẹ ẹwẹn vẹ iroro rẹ avwanre .\n", "2019-12-30 09:57:55,453 Example #1\n", "2019-12-30 09:57:55,453 \tSource: Today he is serving at Bethel .\n", "2019-12-30 09:57:55,453 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:57:55,453 \tHypothesis: Asaọkiephana , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 09:57:55,453 Example #2\n", "2019-12-30 09:57:55,454 \tSource: But freedom from what ?\n", "2019-12-30 09:57:55,454 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 09:57:55,454 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 09:57:55,454 Example #3\n", "2019-12-30 09:57:55,454 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 09:57:55,454 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 09:57:55,454 \tHypothesis: Wọ vwẹ ukoko kpokpọ wẹn vwo dje we phia .\n", "2019-12-30 09:57:55,454 Validation result (greedy) at epoch 78, step 23000: bleu: 15.24, loss: 51979.7852, ppl: 9.0547, duration: 45.2300s\n", "2019-12-30 09:58:05,556 Epoch 78: total training loss 369.51\n", "2019-12-30 09:58:05,556 EPOCH 79\n", "2019-12-30 09:58:10,857 Epoch 79 Step: 23100 Batch Loss: 1.015360 Tokens per Sec: 14019, Lr: 0.000300\n", "2019-12-30 09:58:26,215 Epoch 79 Step: 23200 Batch Loss: 1.270459 Tokens per Sec: 13537, Lr: 0.000300\n", "2019-12-30 09:58:41,525 Epoch 79 Step: 23300 Batch Loss: 1.571929 Tokens per Sec: 13608, Lr: 0.000300\n", "2019-12-30 09:58:50,557 Epoch 79: total training loss 361.85\n", "2019-12-30 09:58:50,557 EPOCH 80\n", "2019-12-30 09:58:56,840 Epoch 80 Step: 23400 Batch Loss: 1.293837 Tokens per Sec: 13785, Lr: 0.000300\n", "2019-12-30 09:59:12,044 Epoch 80 Step: 23500 Batch Loss: 1.602674 Tokens per Sec: 13521, Lr: 0.000300\n", "2019-12-30 09:59:27,285 Epoch 80 Step: 23600 Batch Loss: 1.476533 Tokens per Sec: 13550, Lr: 0.000300\n", "2019-12-30 09:59:35,721 Epoch 80: total training loss 363.27\n", "2019-12-30 09:59:35,722 EPOCH 81\n", "2019-12-30 09:59:42,431 Epoch 81 Step: 23700 Batch Loss: 1.502211 Tokens per Sec: 13462, Lr: 0.000300\n", "2019-12-30 09:59:57,714 Epoch 81 Step: 23800 Batch Loss: 1.477458 Tokens per Sec: 13650, Lr: 0.000300\n", "2019-12-30 10:00:12,847 Epoch 81 Step: 23900 Batch Loss: 1.628601 Tokens per Sec: 13454, Lr: 0.000300\n", "2019-12-30 10:00:20,800 Epoch 81: total training loss 366.27\n", "2019-12-30 10:00:20,800 EPOCH 82\n", "2019-12-30 10:00:28,156 Epoch 82 Step: 24000 Batch Loss: 1.386370 Tokens per 
Sec: 13594, Lr: 0.000300\n", "2019-12-30 10:01:13,333 Example #0\n", "2019-12-30 10:01:13,333 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:01:13,333 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:01:13,333 \tHypothesis: Enana evo usun rẹ erọnvwọn re cha nẹrhẹ a riẹn nẹ ayen che ru ubiudu avwanre siẹrẹ e de nene odjekẹ rẹ Baibol na .\n", "2019-12-30 10:01:13,333 Example #1\n", "2019-12-30 10:01:13,333 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:01:13,333 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:01:13,334 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:01:13,334 Example #2\n", "2019-12-30 10:01:13,334 \tSource: But freedom from what ?\n", "2019-12-30 10:01:13,334 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:01:13,334 \tHypothesis: Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 10:01:13,334 Example #3\n", "2019-12-30 10:01:13,334 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:01:13,334 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:01:13,334 \tHypothesis: Wọ vwẹ ukoko kpokpọ vwo dje we phia - a .\n", "2019-12-30 10:01:13,334 Validation result (greedy) at epoch 82, step 24000: bleu: 15.39, loss: 52359.1602, ppl: 9.2014, duration: 45.1782s\n", "2019-12-30 10:01:28,768 Epoch 82 Step: 24100 Batch Loss: 1.336250 Tokens per Sec: 13385, Lr: 0.000210\n", "2019-12-30 10:01:43,986 Epoch 82 Step: 24200 Batch Loss: 1.106351 Tokens per Sec: 13467, Lr: 0.000210\n", "2019-12-30 10:01:51,325 Epoch 82: total training loss 351.36\n", "2019-12-30 10:01:51,326 EPOCH 83\n", "2019-12-30 10:01:59,277 Epoch 83 Step: 24300 Batch Loss: 1.050732 Tokens per Sec: 13201, Lr: 0.000210\n", "2019-12-30 10:02:14,663 Epoch 83 Step: 24400 Batch Loss: 1.382987 Tokens per Sec: 13633, Lr: 0.000210\n", "2019-12-30 10:02:29,983 Epoch 83 Step: 24500 Batch Loss: 1.137238 Tokens per Sec: 13611, Lr: 0.000210\n", "2019-12-30 10:02:36,544 Epoch 83: total training loss 346.29\n", "2019-12-30 10:02:36,544 EPOCH 84\n", "2019-12-30 10:02:45,290 Epoch 84 Step: 24600 Batch Loss: 1.146257 Tokens per Sec: 13235, Lr: 0.000210\n", "2019-12-30 10:03:00,582 Epoch 84 Step: 24700 Batch Loss: 0.772719 Tokens per Sec: 13572, Lr: 0.000210\n", "2019-12-30 10:03:15,975 Epoch 84 Step: 24800 Batch Loss: 0.928637 Tokens per Sec: 13524, Lr: 0.000210\n", "2019-12-30 10:03:21,976 Epoch 84: total training loss 344.80\n", "2019-12-30 10:03:21,976 EPOCH 85\n", "2019-12-30 10:03:31,323 Epoch 85 Step: 24900 Batch Loss: 0.708931 Tokens per Sec: 13171, Lr: 0.000210\n", "2019-12-30 10:03:46,765 Epoch 85 Step: 25000 Batch Loss: 1.338170 Tokens per Sec: 13756, Lr: 0.000210\n", "2019-12-30 10:04:31,966 Example #0\n", "2019-12-30 10:04:31,966 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:04:31,966 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:04:31,966 \tHypothesis: Enana ghwa iwan rẹ erọnvwọn nana che shephiyọ vẹ ẹwẹn rẹ ayen vwo ru obo rehẹ ubiudu rẹ avwanre .\n", "2019-12-30 10:04:31,966 Example #1\n", "2019-12-30 10:04:31,966 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:04:31,966 \tReference: Nonẹna , 
ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:04:31,966 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:04:31,966 Example #2\n", "2019-12-30 10:04:31,967 \tSource: But freedom from what ?\n", "2019-12-30 10:04:31,967 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:04:31,967 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen che nẹ obuko rọyen rhe ?\n", "2019-12-30 10:04:31,967 Example #3\n", "2019-12-30 10:04:31,967 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:04:31,967 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:04:31,967 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje ihwo kpokpọ phia .\n", "2019-12-30 10:04:31,967 Validation result (greedy) at epoch 85, step 25000: bleu: 15.69, loss: 52419.0703, ppl: 9.2248, duration: 45.2021s\n", "2019-12-30 10:04:47,267 Epoch 85 Step: 25100 Batch Loss: 1.163263 Tokens per Sec: 13278, Lr: 0.000210\n", "2019-12-30 10:04:52,528 Epoch 85: total training loss 339.87\n", "2019-12-30 10:04:52,529 EPOCH 86\n", "2019-12-30 10:05:02,671 Epoch 86 Step: 25200 Batch Loss: 0.708688 Tokens per Sec: 13333, Lr: 0.000210\n", "2019-12-30 10:05:17,833 Epoch 86 Step: 25300 Batch Loss: 1.126011 Tokens per Sec: 13570, Lr: 0.000210\n", "2019-12-30 10:05:33,281 Epoch 86 Step: 25400 Batch Loss: 1.302358 Tokens per Sec: 13530, Lr: 0.000210\n", "2019-12-30 10:05:37,838 Epoch 86: total training loss 338.60\n", "2019-12-30 10:05:37,838 EPOCH 87\n", "2019-12-30 10:05:48,676 Epoch 87 Step: 25500 Batch Loss: 0.896461 Tokens per Sec: 13555, Lr: 0.000210\n", "2019-12-30 10:06:03,853 Epoch 87 Step: 25600 Batch Loss: 1.344859 Tokens per Sec: 13441, Lr: 0.000210\n", "2019-12-30 10:06:19,325 Epoch 87 Step: 25700 Batch Loss: 0.854214 Tokens per Sec: 13922, Lr: 0.000210\n", "2019-12-30 10:06:23,022 Epoch 87: total training loss 336.94\n", "2019-12-30 10:06:23,022 EPOCH 88\n", "2019-12-30 10:06:34,403 Epoch 88 Step: 25800 Batch Loss: 0.838229 Tokens per Sec: 13359, Lr: 0.000210\n", "2019-12-30 10:06:49,948 Epoch 88 Step: 25900 Batch Loss: 1.274763 Tokens per Sec: 13573, Lr: 0.000210\n", "2019-12-30 10:07:05,079 Epoch 88 Step: 26000 Batch Loss: 1.357083 Tokens per Sec: 13516, Lr: 0.000210\n", "2019-12-30 10:07:50,331 Example #0\n", "2019-12-30 10:07:50,331 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:07:50,331 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:07:50,331 \tHypothesis: Enana evo usun rẹ ẹdia rẹ ayen cha dia vwẹ idjerhe rẹ ẹwẹn avwanre vwo nene iroro rẹ avwanre .\n", "2019-12-30 10:07:50,331 Example #1\n", "2019-12-30 10:07:50,332 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:07:50,332 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:07:50,332 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:07:50,332 Example #2\n", "2019-12-30 10:07:50,332 \tSource: But freedom from what ?\n", "2019-12-30 10:07:50,332 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:07:50,332 \tHypothesis: ( Rom 5 : 11 ) Ẹkẹvuọvo , egbomọphẹ vọ yen nẹ obuko rọyen rhe ?\n", "2019-12-30 10:07:50,332 Example #3\n", "2019-12-30 10:07:50,332 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:07:50,332 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:07:50,332 
\tHypothesis: Wọ vwẹ ukoko wẹn vwo mu , je vuẹ ohwo ọvo nẹ ọ dia ọvo usun rẹ iniọvo wẹn .\n", "2019-12-30 10:07:50,332 Validation result (greedy) at epoch 88, step 26000: bleu: 15.57, loss: 52550.0938, ppl: 9.2762, duration: 45.2529s\n", "2019-12-30 10:07:53,716 Epoch 88: total training loss 336.02\n", "2019-12-30 10:07:53,717 EPOCH 89\n", "2019-12-30 10:08:05,683 Epoch 89 Step: 26100 Batch Loss: 1.445318 Tokens per Sec: 13399, Lr: 0.000210\n", "2019-12-30 10:08:21,136 Epoch 89 Step: 26200 Batch Loss: 1.473479 Tokens per Sec: 13519, Lr: 0.000210\n", "2019-12-30 10:08:36,552 Epoch 89 Step: 26300 Batch Loss: 1.242839 Tokens per Sec: 13666, Lr: 0.000210\n", "2019-12-30 10:08:38,998 Epoch 89: total training loss 331.43\n", "2019-12-30 10:08:38,998 EPOCH 90\n", "2019-12-30 10:08:51,825 Epoch 90 Step: 26400 Batch Loss: 1.238136 Tokens per Sec: 13505, Lr: 0.000210\n", "2019-12-30 10:09:07,167 Epoch 90 Step: 26500 Batch Loss: 1.291831 Tokens per Sec: 13550, Lr: 0.000210\n", "2019-12-30 10:09:22,472 Epoch 90 Step: 26600 Batch Loss: 0.970429 Tokens per Sec: 13573, Lr: 0.000210\n", "2019-12-30 10:09:24,330 Epoch 90: total training loss 333.01\n", "2019-12-30 10:09:24,331 EPOCH 91\n", "2019-12-30 10:09:37,708 Epoch 91 Step: 26700 Batch Loss: 1.102858 Tokens per Sec: 13387, Lr: 0.000210\n", "2019-12-30 10:09:53,107 Epoch 91 Step: 26800 Batch Loss: 1.221982 Tokens per Sec: 13701, Lr: 0.000210\n", "2019-12-30 10:10:08,264 Epoch 91 Step: 26900 Batch Loss: 1.123548 Tokens per Sec: 13329, Lr: 0.000210\n", "2019-12-30 10:10:09,771 Epoch 91: total training loss 334.18\n", "2019-12-30 10:10:09,771 EPOCH 92\n", "2019-12-30 10:10:23,332 Epoch 92 Step: 27000 Batch Loss: 1.236489 Tokens per Sec: 13483, Lr: 0.000210\n", "2019-12-30 10:11:08,636 Example #0\n", "2019-12-30 10:11:08,636 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:11:08,636 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:11:08,636 \tHypothesis: Enana cha nẹrhẹ a riẹn asan rẹ ubiudu rẹ ihwo cha dia vwẹ idjerhe rẹ ẹwẹn avwanre vwo roro kpahen ẹdia rẹ ẹwẹn rẹ avwanre hepha .\n", "2019-12-30 10:11:08,636 Example #1\n", "2019-12-30 10:11:08,636 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:11:08,637 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:11:08,637 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:11:08,637 Example #2\n", "2019-12-30 10:11:08,637 \tSource: But freedom from what ?\n", "2019-12-30 10:11:08,637 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:11:08,637 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 10:11:08,637 Example #3\n", "2019-12-30 10:11:08,637 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:11:08,637 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:11:08,637 \tHypothesis: Wọ vwẹ ukoko wẹn vwo mu , je vuẹ ohwo ọvo nẹ ọ dia ọvo usun rẹ iniọvo wẹn .\n", "2019-12-30 10:11:08,637 Validation result (greedy) at epoch 92, step 27000: bleu: 16.13, loss: 52610.7617, ppl: 9.3001, duration: 45.3047s\n", "2019-12-30 10:11:23,992 Epoch 92 Step: 27100 Batch Loss: 0.877564 Tokens per Sec: 13466, Lr: 0.000210\n", "2019-12-30 10:11:39,286 Epoch 92 Step: 27200 Batch Loss: 1.209882 Tokens per Sec: 13874, Lr: 0.000210\n", "2019-12-30 10:11:40,172 Epoch 92: total training loss 330.28\n", 
"2019-12-30 10:11:40,172 EPOCH 93\n", "2019-12-30 10:11:54,490 Epoch 93 Step: 27300 Batch Loss: 1.441384 Tokens per Sec: 13198, Lr: 0.000210\n", "2019-12-30 10:12:09,831 Epoch 93 Step: 27400 Batch Loss: 1.198710 Tokens per Sec: 13528, Lr: 0.000210\n", "2019-12-30 10:12:25,321 Epoch 93 Step: 27500 Batch Loss: 1.292060 Tokens per Sec: 13926, Lr: 0.000210\n", "2019-12-30 10:12:25,322 Epoch 93: total training loss 325.73\n", "2019-12-30 10:12:25,322 EPOCH 94\n", "2019-12-30 10:12:40,563 Epoch 94 Step: 27600 Batch Loss: 1.099503 Tokens per Sec: 13353, Lr: 0.000210\n", "2019-12-30 10:12:55,858 Epoch 94 Step: 27700 Batch Loss: 1.205656 Tokens per Sec: 13632, Lr: 0.000210\n", "2019-12-30 10:13:10,808 Epoch 94: total training loss 326.84\n", "2019-12-30 10:13:10,808 EPOCH 95\n", "2019-12-30 10:13:11,151 Epoch 95 Step: 27800 Batch Loss: 1.401190 Tokens per Sec: 10489, Lr: 0.000210\n", "2019-12-30 10:13:26,349 Epoch 95 Step: 27900 Batch Loss: 1.133032 Tokens per Sec: 13561, Lr: 0.000210\n", "2019-12-30 10:13:41,633 Epoch 95 Step: 28000 Batch Loss: 0.905591 Tokens per Sec: 13395, Lr: 0.000210\n", "2019-12-30 10:14:26,816 Example #0\n", "2019-12-30 10:14:26,816 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:14:26,816 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:14:26,816 \tHypothesis: Enana vwo ọrhuẹrẹphiyotọ rẹ ayen che vwo ru ọnana vwẹ idjerhe rẹ ẹwẹn rẹ avwanre che vwo ẹruọ rẹ obo rehẹ ubiudu rẹ avwanre .\n", "2019-12-30 10:14:26,817 Example #1\n", "2019-12-30 10:14:26,817 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:14:26,817 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:14:26,817 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:14:26,817 Example #2\n", "2019-12-30 10:14:26,817 \tSource: But freedom from what ?\n", "2019-12-30 10:14:26,817 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:14:26,817 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen che nẹ obọ rọyen rhe ?\n", "2019-12-30 10:14:26,817 Example #3\n", "2019-12-30 10:14:26,817 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:14:26,817 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:14:26,817 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ dia ọvo usun rẹ iniọvo wẹn na - a .\n", "2019-12-30 10:14:26,818 Validation result (greedy) at epoch 95, step 28000: bleu: 15.92, loss: 53156.5078, ppl: 9.5177, duration: 45.1845s\n", "2019-12-30 10:14:41,377 Epoch 95: total training loss 325.51\n", "2019-12-30 10:14:41,377 EPOCH 96\n", "2019-12-30 10:14:42,201 Epoch 96 Step: 28100 Batch Loss: 1.093741 Tokens per Sec: 12809, Lr: 0.000210\n", "2019-12-30 10:14:57,321 Epoch 96 Step: 28200 Batch Loss: 0.592021 Tokens per Sec: 13432, Lr: 0.000210\n", "2019-12-30 10:15:12,622 Epoch 96 Step: 28300 Batch Loss: 0.974297 Tokens per Sec: 13678, Lr: 0.000210\n", "2019-12-30 10:15:26,558 Epoch 96: total training loss 323.10\n", "2019-12-30 10:15:26,558 EPOCH 97\n", "2019-12-30 10:15:28,021 Epoch 97 Step: 28400 Batch Loss: 1.017588 Tokens per Sec: 14376, Lr: 0.000210\n", "2019-12-30 10:15:43,240 Epoch 97 Step: 28500 Batch Loss: 1.200243 Tokens per Sec: 13410, Lr: 0.000210\n", "2019-12-30 10:15:58,332 Epoch 97 Step: 28600 Batch Loss: 1.045490 Tokens per Sec: 13531, Lr: 0.000210\n", "2019-12-30 10:16:12,086 Epoch 97: total training 
loss 323.85\n", "2019-12-30 10:16:12,086 EPOCH 98\n", "2019-12-30 10:16:13,691 Epoch 98 Step: 28700 Batch Loss: 1.128356 Tokens per Sec: 14454, Lr: 0.000210\n", "2019-12-30 10:16:28,884 Epoch 98 Step: 28800 Batch Loss: 0.435680 Tokens per Sec: 13360, Lr: 0.000210\n", "2019-12-30 10:16:43,979 Epoch 98 Step: 28900 Batch Loss: 0.565928 Tokens per Sec: 13409, Lr: 0.000210\n", "2019-12-30 10:16:57,244 Epoch 98: total training loss 320.50\n", "2019-12-30 10:16:57,244 EPOCH 99\n", "2019-12-30 10:16:59,284 Epoch 99 Step: 29000 Batch Loss: 1.331808 Tokens per Sec: 13162, Lr: 0.000210\n", "2019-12-30 10:17:44,565 Example #0\n", "2019-12-30 10:17:44,565 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:17:44,565 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:17:44,565 \tHypothesis: Enana vwo ọrhuẹrẹphiyotọ rẹ ayen che vwo ru ọnana .\n", "2019-12-30 10:17:44,565 Example #1\n", "2019-12-30 10:17:44,566 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:17:44,566 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:17:44,566 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:17:44,566 Example #2\n", "2019-12-30 10:17:44,566 \tSource: But freedom from what ?\n", "2019-12-30 10:17:44,566 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:17:44,566 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vwo ?\n", "2019-12-30 10:17:44,566 Example #3\n", "2019-12-30 10:17:44,566 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:17:44,566 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:17:44,566 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje ekpokpọ phia , je davwẹngba wẹn eje vwọ riẹriẹ .\n", "2019-12-30 10:17:44,566 Validation result (greedy) at epoch 99, step 29000: bleu: 15.88, loss: 53019.1875, ppl: 9.4625, duration: 45.2818s\n", "2019-12-30 10:17:59,937 Epoch 99 Step: 29100 Batch Loss: 1.156618 Tokens per Sec: 13832, Lr: 0.000210\n", "2019-12-30 10:18:15,201 Epoch 99 Step: 29200 Batch Loss: 1.210035 Tokens per Sec: 13486, Lr: 0.000210\n", "2019-12-30 10:18:27,507 Epoch 99: total training loss 316.36\n", "2019-12-30 10:18:27,508 EPOCH 100\n", "2019-12-30 10:18:30,294 Epoch 100 Step: 29300 Batch Loss: 1.309861 Tokens per Sec: 13197, Lr: 0.000210\n", "2019-12-30 10:18:45,668 Epoch 100 Step: 29400 Batch Loss: 0.720334 Tokens per Sec: 13491, Lr: 0.000210\n", "2019-12-30 10:19:00,987 Epoch 100 Step: 29500 Batch Loss: 0.822227 Tokens per Sec: 13839, Lr: 0.000210\n", "2019-12-30 10:19:12,412 Epoch 100: total training loss 313.65\n", "2019-12-30 10:19:12,412 EPOCH 101\n", "2019-12-30 10:19:16,265 Epoch 101 Step: 29600 Batch Loss: 1.002169 Tokens per Sec: 13385, Lr: 0.000210\n", "2019-12-30 10:19:31,504 Epoch 101 Step: 29700 Batch Loss: 1.094298 Tokens per Sec: 13669, Lr: 0.000210\n", "2019-12-30 10:19:46,917 Epoch 101 Step: 29800 Batch Loss: 1.373768 Tokens per Sec: 13757, Lr: 0.000210\n", "2019-12-30 10:19:57,663 Epoch 101: total training loss 314.43\n", "2019-12-30 10:19:57,663 EPOCH 102\n", "2019-12-30 10:20:02,045 Epoch 102 Step: 29900 Batch Loss: 1.143828 Tokens per Sec: 14075, Lr: 0.000210\n", "2019-12-30 10:20:17,263 Epoch 102 Step: 30000 Batch Loss: 1.013143 Tokens per Sec: 13324, Lr: 0.000210\n", "2019-12-30 10:21:02,559 Example #0\n", "2019-12-30 10:21:02,559 \tSource: These orchestral arrangements are 
composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:21:02,560 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:21:02,560 \tHypothesis: Enana cha nẹrhẹ a riẹn asan rẹ ubiudu rẹ avwanre hepha , ọ me je nẹrhẹ a karophiyọ ayen .\n", "2019-12-30 10:21:02,560 Example #1\n", "2019-12-30 10:21:02,560 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:21:02,560 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:21:02,560 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:21:02,560 Example #2\n", "2019-12-30 10:21:02,560 \tSource: But freedom from what ?\n", "2019-12-30 10:21:02,560 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:21:02,560 \tHypothesis: ( Rom 6 : ​ 4 - 6 ) Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 10:21:02,560 Example #3\n", "2019-12-30 10:21:02,560 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:21:02,561 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:21:02,561 \tHypothesis: Wọ vwẹ ukoko kpokpọ vwo dje eghọn ukoko wẹn phia .\n", "2019-12-30 10:21:02,561 Validation result (greedy) at epoch 102, step 30000: bleu: 15.73, loss: 53459.3438, ppl: 9.6407, duration: 45.2976s\n", "2019-12-30 10:21:17,843 Epoch 102 Step: 30100 Batch Loss: 1.239830 Tokens per Sec: 13499, Lr: 0.000147\n", "2019-12-30 10:21:28,357 Epoch 102: total training loss 311.40\n", "2019-12-30 10:21:28,358 EPOCH 103\n", "2019-12-30 10:21:33,089 Epoch 103 Step: 30200 Batch Loss: 1.319486 Tokens per Sec: 12994, Lr: 0.000147\n", "2019-12-30 10:21:48,448 Epoch 103 Step: 30300 Batch Loss: 1.041703 Tokens per Sec: 13761, Lr: 0.000147\n", "2019-12-30 10:22:03,920 Epoch 103 Step: 30400 Batch Loss: 0.983515 Tokens per Sec: 13640, Lr: 0.000147\n", "2019-12-30 10:22:13,773 Epoch 103: total training loss 306.53\n", "2019-12-30 10:22:13,773 EPOCH 104\n", "2019-12-30 10:22:18,970 Epoch 104 Step: 30500 Batch Loss: 0.825719 Tokens per Sec: 13330, Lr: 0.000147\n", "2019-12-30 10:22:34,210 Epoch 104 Step: 30600 Batch Loss: 1.115937 Tokens per Sec: 13738, Lr: 0.000147\n", "2019-12-30 10:22:49,502 Epoch 104 Step: 30700 Batch Loss: 1.124837 Tokens per Sec: 13348, Lr: 0.000147\n", "2019-12-30 10:22:58,921 Epoch 104: total training loss 302.26\n", "2019-12-30 10:22:58,921 EPOCH 105\n", "2019-12-30 10:23:04,842 Epoch 105 Step: 30800 Batch Loss: 1.066686 Tokens per Sec: 13527, Lr: 0.000147\n", "2019-12-30 10:23:20,223 Epoch 105 Step: 30900 Batch Loss: 1.230033 Tokens per Sec: 13731, Lr: 0.000147\n", "2019-12-30 10:23:35,415 Epoch 105 Step: 31000 Batch Loss: 0.995581 Tokens per Sec: 13368, Lr: 0.000147\n", "2019-12-30 10:24:20,660 Example #0\n", "2019-12-30 10:24:20,660 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:24:20,661 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:24:20,661 \tHypothesis: Enana vwo erọnvwọn evo rẹ ayen che ru vwẹ idjerhe rẹ ẹwẹn avwanre cha vwọ riẹn nẹ ayen che ru vẹ ẹwẹn rẹ avwanre .\n", "2019-12-30 10:24:20,661 Example #1\n", "2019-12-30 10:24:20,661 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:24:20,661 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:24:20,661 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", 
"2019-12-30 10:24:20,661 Example #2\n", "2019-12-30 10:24:20,661 \tSource: But freedom from what ?\n", "2019-12-30 10:24:20,661 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:24:20,661 \tHypothesis: ( Rom 5 : ​ 4 - 6 ) Ẹkẹvuọvo , egbomọphẹ vọ kọyen rhe ?\n", "2019-12-30 10:24:20,661 Example #3\n", "2019-12-30 10:24:20,662 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:24:20,662 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:24:20,662 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ dia ọvo usun rẹ iniọvo wẹn na - a .\n", "2019-12-30 10:24:20,662 Validation result (greedy) at epoch 105, step 31000: bleu: 15.55, loss: 53562.0625, ppl: 9.6828, duration: 45.2470s\n", "2019-12-30 10:24:29,492 Epoch 105: total training loss 302.17\n", "2019-12-30 10:24:29,492 EPOCH 106\n", "2019-12-30 10:24:35,978 Epoch 106 Step: 31100 Batch Loss: 1.097734 Tokens per Sec: 13695, Lr: 0.000147\n", "2019-12-30 10:24:51,236 Epoch 106 Step: 31200 Batch Loss: 0.849957 Tokens per Sec: 13531, Lr: 0.000147\n", "2019-12-30 10:25:06,346 Epoch 106 Step: 31300 Batch Loss: 1.026879 Tokens per Sec: 13512, Lr: 0.000147\n", "2019-12-30 10:25:14,653 Epoch 106: total training loss 300.52\n", "2019-12-30 10:25:14,653 EPOCH 107\n", "2019-12-30 10:25:21,672 Epoch 107 Step: 31400 Batch Loss: 0.778814 Tokens per Sec: 13359, Lr: 0.000147\n", "2019-12-30 10:25:36,925 Epoch 107 Step: 31500 Batch Loss: 1.406457 Tokens per Sec: 13322, Lr: 0.000147\n", "2019-12-30 10:25:52,151 Epoch 107 Step: 31600 Batch Loss: 1.280529 Tokens per Sec: 13532, Lr: 0.000147\n", "2019-12-30 10:26:00,013 Epoch 107: total training loss 300.50\n", "2019-12-30 10:26:00,013 EPOCH 108\n", "2019-12-30 10:26:07,505 Epoch 108 Step: 31700 Batch Loss: 1.333431 Tokens per Sec: 13185, Lr: 0.000147\n", "2019-12-30 10:26:22,759 Epoch 108 Step: 31800 Batch Loss: 0.888862 Tokens per Sec: 13502, Lr: 0.000147\n", "2019-12-30 10:26:38,012 Epoch 108 Step: 31900 Batch Loss: 1.184997 Tokens per Sec: 13637, Lr: 0.000147\n", "2019-12-30 10:26:45,335 Epoch 108: total training loss 297.88\n", "2019-12-30 10:26:45,335 EPOCH 109\n", "2019-12-30 10:26:53,269 Epoch 109 Step: 32000 Batch Loss: 1.249118 Tokens per Sec: 13631, Lr: 0.000147\n", "2019-12-30 10:27:38,429 Example #0\n", "2019-12-30 10:27:38,429 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:27:38,429 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:27:38,429 \tHypothesis: Enana vwo erọnvwọn evo rẹ ayen cha reyọ ẹdia tiọyena vwọ riẹn iroro rẹ avwanre , je karophiyọ nẹ ayen che ru ọke rẹ ẹwẹn avwanre de nene odjekẹ rẹ Baibol na .\n", "2019-12-30 10:27:38,429 Example #1\n", "2019-12-30 10:27:38,429 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:27:38,429 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:27:38,429 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:27:38,429 Example #2\n", "2019-12-30 10:27:38,430 \tSource: But freedom from what ?\n", "2019-12-30 10:27:38,430 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:27:38,430 \tHypothesis: Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 10:27:38,430 Example #3\n", "2019-12-30 10:27:38,430 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:27:38,430 \tReference: Wọ vwẹ 
ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:27:38,430 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ wo vwo ukoko kpokpọ .\n", "2019-12-30 10:27:38,430 Validation result (greedy) at epoch 109, step 32000: bleu: 15.79, loss: 53618.8398, ppl: 9.7061, duration: 45.1612s\n", "2019-12-30 10:27:53,814 Epoch 109 Step: 32100 Batch Loss: 1.263824 Tokens per Sec: 13868, Lr: 0.000147\n", "2019-12-30 10:28:08,967 Epoch 109 Step: 32200 Batch Loss: 1.315072 Tokens per Sec: 13347, Lr: 0.000147\n", "2019-12-30 10:28:15,537 Epoch 109: total training loss 295.45\n", "2019-12-30 10:28:15,538 EPOCH 110\n", "2019-12-30 10:28:24,193 Epoch 110 Step: 32300 Batch Loss: 1.013196 Tokens per Sec: 13284, Lr: 0.000147\n", "2019-12-30 10:28:39,287 Epoch 110 Step: 32400 Batch Loss: 1.117941 Tokens per Sec: 13497, Lr: 0.000147\n", "2019-12-30 10:28:54,600 Epoch 110 Step: 32500 Batch Loss: 0.877468 Tokens per Sec: 13891, Lr: 0.000147\n", "2019-12-30 10:29:00,558 Epoch 110: total training loss 296.97\n", "2019-12-30 10:29:00,559 EPOCH 111\n", "2019-12-30 10:29:09,805 Epoch 111 Step: 32600 Batch Loss: 1.174822 Tokens per Sec: 13543, Lr: 0.000147\n", "2019-12-30 10:29:25,040 Epoch 111 Step: 32700 Batch Loss: 1.188571 Tokens per Sec: 13600, Lr: 0.000147\n", "2019-12-30 10:29:40,500 Epoch 111 Step: 32800 Batch Loss: 1.212834 Tokens per Sec: 13665, Lr: 0.000147\n", "2019-12-30 10:29:45,528 Epoch 111: total training loss 291.47\n", "2019-12-30 10:29:45,528 EPOCH 112\n", "2019-12-30 10:29:55,818 Epoch 112 Step: 32900 Batch Loss: 1.199140 Tokens per Sec: 13539, Lr: 0.000147\n", "2019-12-30 10:30:10,761 Epoch 112 Step: 33000 Batch Loss: 0.922446 Tokens per Sec: 13319, Lr: 0.000147\n", "2019-12-30 10:30:55,998 Example #0\n", "2019-12-30 10:30:55,998 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:30:55,998 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:30:55,998 \tHypothesis: Enana vwo erọnvwọn evo rẹ ayen cha reyọ ẹdia na vwo muegbe rẹ ubiudu vẹ iroro rẹ avwanre .\n", "2019-12-30 10:30:55,998 Example #1\n", "2019-12-30 10:30:55,998 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:30:55,998 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:30:55,998 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:30:55,998 Example #2\n", "2019-12-30 10:30:55,998 \tSource: But freedom from what ?\n", "2019-12-30 10:30:55,998 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:30:55,999 \tHypothesis: ( Rom 6 : 1 - 4 ) Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 10:30:55,999 Example #3\n", "2019-12-30 10:30:55,999 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:30:55,999 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:30:55,999 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ dia ọvo usun rẹ iniọvo wẹn ri vwo uruemu tiọyena - a .\n", "2019-12-30 10:30:55,999 Validation result (greedy) at epoch 112, step 33000: bleu: 15.86, loss: 53921.9727, ppl: 9.8316, duration: 45.2379s\n", "2019-12-30 10:31:11,480 Epoch 112 Step: 33100 Batch Loss: 1.313844 Tokens per Sec: 13921, Lr: 0.000147\n", "2019-12-30 10:31:15,991 Epoch 112: total training loss 294.62\n", "2019-12-30 10:31:15,991 EPOCH 113\n", "2019-12-30 10:31:26,586 Epoch 113 Step: 33200 Batch Loss: 0.515194 Tokens per Sec: 13456, Lr: 0.000147\n", 
"2019-12-30 10:31:42,004 Epoch 113 Step: 33300 Batch Loss: 0.845201 Tokens per Sec: 13636, Lr: 0.000147\n", "2019-12-30 10:31:57,302 Epoch 113 Step: 33400 Batch Loss: 1.236781 Tokens per Sec: 13478, Lr: 0.000147\n", "2019-12-30 10:32:01,250 Epoch 113: total training loss 292.62\n", "2019-12-30 10:32:01,250 EPOCH 114\n", "2019-12-30 10:32:12,503 Epoch 114 Step: 33500 Batch Loss: 0.508112 Tokens per Sec: 13451, Lr: 0.000147\n", "2019-12-30 10:32:27,816 Epoch 114 Step: 33600 Batch Loss: 1.309439 Tokens per Sec: 13410, Lr: 0.000147\n", "2019-12-30 10:32:43,350 Epoch 114 Step: 33700 Batch Loss: 0.512418 Tokens per Sec: 13735, Lr: 0.000147\n", "2019-12-30 10:32:46,565 Epoch 114: total training loss 290.18\n", "2019-12-30 10:32:46,565 EPOCH 115\n", "2019-12-30 10:32:58,618 Epoch 115 Step: 33800 Batch Loss: 1.116151 Tokens per Sec: 13551, Lr: 0.000147\n", "2019-12-30 10:33:13,768 Epoch 115 Step: 33900 Batch Loss: 1.247758 Tokens per Sec: 13363, Lr: 0.000147\n", "2019-12-30 10:33:29,098 Epoch 115 Step: 34000 Batch Loss: 1.157068 Tokens per Sec: 13669, Lr: 0.000147\n", "2019-12-30 10:34:14,251 Example #0\n", "2019-12-30 10:34:14,252 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:34:14,252 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:34:14,252 \tHypothesis: Enana vwo ọrhuẹrẹphiyotọ rẹ ayen che vwo nene oborẹ ayen cha ta vwẹ idjerhe rẹ ẹwẹn avwanre vwọ nabọ karophiyọ ohwohwo .\n", "2019-12-30 10:34:14,252 Example #1\n", "2019-12-30 10:34:14,252 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:34:14,252 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:34:14,252 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:34:14,252 Example #2\n", "2019-12-30 10:34:14,252 \tSource: But freedom from what ?\n", "2019-12-30 10:34:14,252 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:34:14,253 \tHypothesis: Kẹ egbomọphẹ vwo ?\n", "2019-12-30 10:34:14,253 Example #3\n", "2019-12-30 10:34:14,253 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:34:14,253 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:34:14,253 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ ghwa họhọ ohwo ọvo wo vwo ẹghwọ - ọ .\n", "2019-12-30 10:34:14,253 Validation result (greedy) at epoch 115, step 34000: bleu: 15.67, loss: 54038.2109, ppl: 9.8802, duration: 45.1542s\n", "2019-12-30 10:34:17,008 Epoch 115: total training loss 291.13\n", "2019-12-30 10:34:17,008 EPOCH 116\n", "2019-12-30 10:34:29,792 Epoch 116 Step: 34100 Batch Loss: 1.012778 Tokens per Sec: 13825, Lr: 0.000147\n", "2019-12-30 10:34:44,853 Epoch 116 Step: 34200 Batch Loss: 1.093912 Tokens per Sec: 13146, Lr: 0.000147\n", "2019-12-30 10:35:00,133 Epoch 116 Step: 34300 Batch Loss: 0.766626 Tokens per Sec: 13577, Lr: 0.000147\n", "2019-12-30 10:35:02,292 Epoch 116: total training loss 288.71\n", "2019-12-30 10:35:02,292 EPOCH 117\n", "2019-12-30 10:35:15,419 Epoch 117 Step: 34400 Batch Loss: 1.092261 Tokens per Sec: 13694, Lr: 0.000147\n", "2019-12-30 10:35:30,721 Epoch 117 Step: 34500 Batch Loss: 1.101712 Tokens per Sec: 13301, Lr: 0.000147\n", "2019-12-30 10:35:46,126 Epoch 117 Step: 34600 Batch Loss: 0.904493 Tokens per Sec: 13583, Lr: 0.000147\n", "2019-12-30 10:35:47,635 Epoch 117: total training loss 288.30\n", "2019-12-30 10:35:47,635 EPOCH 
118\n", "2019-12-30 10:36:01,385 Epoch 118 Step: 34700 Batch Loss: 0.610063 Tokens per Sec: 13314, Lr: 0.000147\n", "2019-12-30 10:36:16,705 Epoch 118 Step: 34800 Batch Loss: 1.173708 Tokens per Sec: 13493, Lr: 0.000147\n", "2019-12-30 10:36:32,077 Epoch 118 Step: 34900 Batch Loss: 0.802481 Tokens per Sec: 13706, Lr: 0.000147\n", "2019-12-30 10:36:32,894 Epoch 118: total training loss 286.22\n", "2019-12-30 10:36:32,895 EPOCH 119\n", "2019-12-30 10:36:47,505 Epoch 119 Step: 35000 Batch Loss: 1.195539 Tokens per Sec: 13399, Lr: 0.000147\n", "2019-12-30 10:37:32,709 Example #0\n", "2019-12-30 10:37:32,709 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:37:32,709 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:37:32,709 \tHypothesis: Enana vwẹ idjerhe sansan vwo ru ọnana .\n", "2019-12-30 10:37:32,709 Example #1\n", "2019-12-30 10:37:32,709 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:37:32,709 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:37:32,709 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:37:32,709 Example #2\n", "2019-12-30 10:37:32,709 \tSource: But freedom from what ?\n", "2019-12-30 10:37:32,709 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:37:32,710 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen che nẹ obọ rọyen rhe ?\n", "2019-12-30 10:37:32,710 Example #3\n", "2019-12-30 10:37:32,710 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:37:32,710 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:37:32,710 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ dia ọvo usun rẹ iniọvo wẹn na - a .\n", "2019-12-30 10:37:32,710 Validation result (greedy) at epoch 119, step 35000: bleu: 15.57, loss: 54266.5391, ppl: 9.9763, duration: 45.2045s\n", "2019-12-30 10:37:48,125 Epoch 119 Step: 35100 Batch Loss: 1.147847 Tokens per Sec: 13550, Lr: 0.000147\n", "2019-12-30 10:38:03,129 Epoch 119 Step: 35200 Batch Loss: 1.116531 Tokens per Sec: 13600, Lr: 0.000147\n", "2019-12-30 10:38:03,431 Epoch 119: total training loss 286.54\n", "2019-12-30 10:38:03,432 EPOCH 120\n", "2019-12-30 10:38:18,514 Epoch 120 Step: 35300 Batch Loss: 1.073197 Tokens per Sec: 13561, Lr: 0.000147\n", "2019-12-30 10:38:33,919 Epoch 120 Step: 35400 Batch Loss: 1.215966 Tokens per Sec: 13509, Lr: 0.000147\n", "2019-12-30 10:38:48,691 Epoch 120: total training loss 283.62\n", "2019-12-30 10:38:48,691 EPOCH 121\n", "2019-12-30 10:38:49,194 Epoch 121 Step: 35500 Batch Loss: 1.234547 Tokens per Sec: 11984, Lr: 0.000147\n", "2019-12-30 10:39:04,583 Epoch 121 Step: 35600 Batch Loss: 0.977350 Tokens per Sec: 13535, Lr: 0.000147\n", "2019-12-30 10:39:19,870 Epoch 121 Step: 35700 Batch Loss: 1.279799 Tokens per Sec: 13336, Lr: 0.000147\n", "2019-12-30 10:39:34,246 Epoch 121: total training loss 284.99\n", "2019-12-30 10:39:34,247 EPOCH 122\n", "2019-12-30 10:39:35,396 Epoch 122 Step: 35800 Batch Loss: 1.010502 Tokens per Sec: 12963, Lr: 0.000147\n", "2019-12-30 10:39:50,739 Epoch 122 Step: 35900 Batch Loss: 1.016414 Tokens per Sec: 13491, Lr: 0.000147\n", "2019-12-30 10:40:06,013 Epoch 122 Step: 36000 Batch Loss: 1.045778 Tokens per Sec: 13585, Lr: 0.000147\n", "2019-12-30 10:40:51,335 Example #0\n", "2019-12-30 10:40:51,335 \tSource: These orchestral arrangements are composed in such a way that they 
will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:40:51,335 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:40:51,335 \tHypothesis: Enana vwo erọnvwọn efa rẹ ayen che ru vwẹ idjerhe rẹ iroro vẹ uruemu rẹ avwanre cha nẹrhẹ e roro kpahen oborẹ avwanre che ru vwẹ ubiudu rẹ ihwo .\n", "2019-12-30 10:40:51,335 Example #1\n", "2019-12-30 10:40:51,335 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:40:51,335 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:40:51,336 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:40:51,336 Example #2\n", "2019-12-30 10:40:51,336 \tSource: But freedom from what ?\n", "2019-12-30 10:40:51,336 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:40:51,336 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen che nẹ obọ rọyen rhe ?\n", "2019-12-30 10:40:51,336 Example #3\n", "2019-12-30 10:40:51,336 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:40:51,336 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:40:51,336 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ dia ọvo usun rẹ iniọvo wẹn ri vwo uduefiogbere - e .\n", "2019-12-30 10:40:51,336 Validation result (greedy) at epoch 122, step 36000: bleu: 15.82, loss: 54322.6797, ppl: 10.0000, duration: 45.3231s\n", "2019-12-30 10:41:04,880 Epoch 122: total training loss 283.94\n", "2019-12-30 10:41:04,880 EPOCH 123\n", "2019-12-30 10:41:06,621 Epoch 123 Step: 36100 Batch Loss: 0.732674 Tokens per Sec: 13490, Lr: 0.000103\n", "2019-12-30 10:41:22,051 Epoch 123 Step: 36200 Batch Loss: 1.122066 Tokens per Sec: 13787, Lr: 0.000103\n", "2019-12-30 10:41:37,293 Epoch 123 Step: 36300 Batch Loss: 0.548200 Tokens per Sec: 13466, Lr: 0.000103\n", "2019-12-30 10:41:50,268 Epoch 123: total training loss 278.64\n", "2019-12-30 10:41:50,268 EPOCH 124\n", "2019-12-30 10:41:52,651 Epoch 124 Step: 36400 Batch Loss: 0.722266 Tokens per Sec: 13496, Lr: 0.000103\n", "2019-12-30 10:42:08,057 Epoch 124 Step: 36500 Batch Loss: 1.149965 Tokens per Sec: 13618, Lr: 0.000103\n", "2019-12-30 10:42:23,187 Epoch 124 Step: 36600 Batch Loss: 0.978019 Tokens per Sec: 13532, Lr: 0.000103\n", "2019-12-30 10:42:35,562 Epoch 124: total training loss 277.00\n", "2019-12-30 10:42:35,562 EPOCH 125\n", "2019-12-30 10:42:38,524 Epoch 125 Step: 36700 Batch Loss: 0.944548 Tokens per Sec: 13471, Lr: 0.000103\n", "2019-12-30 10:42:53,748 Epoch 125 Step: 36800 Batch Loss: 1.059758 Tokens per Sec: 13464, Lr: 0.000103\n", "2019-12-30 10:43:09,089 Epoch 125 Step: 36900 Batch Loss: 0.990192 Tokens per Sec: 13747, Lr: 0.000103\n", "2019-12-30 10:43:20,736 Epoch 125: total training loss 274.95\n", "2019-12-30 10:43:20,736 EPOCH 126\n", "2019-12-30 10:43:24,412 Epoch 126 Step: 37000 Batch Loss: 0.785760 Tokens per Sec: 12970, Lr: 0.000103\n", "2019-12-30 10:44:09,718 Example #0\n", "2019-12-30 10:44:09,718 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:44:09,718 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:44:09,718 \tHypothesis: Enana vwo erọnvwọn evo rẹ ayen cha reyọ ẹdia na vwo ruiruo vwẹ idjerhe rọ cha nẹrhẹ a karophiyọ ayen .\n", "2019-12-30 10:44:09,718 Example #1\n", "2019-12-30 10:44:09,718 \tSource: Today he is serving at Bethel 
.\n", "2019-12-30 10:44:09,718 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:44:09,718 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:44:09,719 Example #2\n", "2019-12-30 10:44:09,719 \tSource: But freedom from what ?\n", "2019-12-30 10:44:09,719 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:44:09,719 \tHypothesis: ( Jems 4 : 5 ) Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ohwo obọ ?\n", "2019-12-30 10:44:09,719 Example #3\n", "2019-12-30 10:44:09,719 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:44:09,719 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:44:09,719 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ dia ọvo usun rẹ iniọvo wẹn ri vwo ẹwẹn esiri - ẹ .\n", "2019-12-30 10:44:09,719 Validation result (greedy) at epoch 126, step 37000: bleu: 15.75, loss: 54445.5039, ppl: 10.0522, duration: 45.3071s\n", "2019-12-30 10:44:25,157 Epoch 126 Step: 37100 Batch Loss: 1.017825 Tokens per Sec: 13696, Lr: 0.000103\n", "2019-12-30 10:44:40,493 Epoch 126 Step: 37200 Batch Loss: 0.798517 Tokens per Sec: 13487, Lr: 0.000103\n", "2019-12-30 10:44:51,283 Epoch 126: total training loss 273.43\n", "2019-12-30 10:44:51,283 EPOCH 127\n", "2019-12-30 10:44:55,922 Epoch 127 Step: 37300 Batch Loss: 0.847415 Tokens per Sec: 13056, Lr: 0.000103\n", "2019-12-30 10:45:11,227 Epoch 127 Step: 37400 Batch Loss: 0.511419 Tokens per Sec: 13687, Lr: 0.000103\n", "2019-12-30 10:45:26,507 Epoch 127 Step: 37500 Batch Loss: 0.591248 Tokens per Sec: 13459, Lr: 0.000103\n", "2019-12-30 10:45:36,564 Epoch 127: total training loss 273.25\n", "2019-12-30 10:45:36,564 EPOCH 128\n", "2019-12-30 10:45:41,953 Epoch 128 Step: 37600 Batch Loss: 1.004264 Tokens per Sec: 13313, Lr: 0.000103\n", "2019-12-30 10:45:57,200 Epoch 128 Step: 37700 Batch Loss: 1.035499 Tokens per Sec: 13289, Lr: 0.000103\n", "2019-12-30 10:46:12,601 Epoch 128 Step: 37800 Batch Loss: 0.929391 Tokens per Sec: 13720, Lr: 0.000103\n", "2019-12-30 10:46:22,037 Epoch 128: total training loss 273.83\n", "2019-12-30 10:46:22,037 EPOCH 129\n", "2019-12-30 10:46:27,915 Epoch 129 Step: 37900 Batch Loss: 0.706885 Tokens per Sec: 13686, Lr: 0.000103\n", "2019-12-30 10:46:43,038 Epoch 129 Step: 38000 Batch Loss: 1.152300 Tokens per Sec: 13432, Lr: 0.000103\n", "2019-12-30 10:47:28,358 Example #0\n", "2019-12-30 10:47:28,358 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:47:28,358 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:47:28,358 \tHypothesis: Enana ghwa evo usun rẹ erọnvwọn re cha nẹrhẹ ayen karophiyọ oborẹ ayen che ru vwẹ ubiudu rẹ avwanre , ji roro kpahen oborẹ avwanre che nene .\n", "2019-12-30 10:47:28,358 Example #1\n", "2019-12-30 10:47:28,359 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:47:28,359 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:47:28,359 \tHypothesis: Asaọkiephana , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:47:28,359 Example #2\n", "2019-12-30 10:47:28,359 \tSource: But freedom from what ?\n", "2019-12-30 10:47:28,359 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:47:28,359 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen che nẹ obọ rọyen rhe ?\n", "2019-12-30 10:47:28,359 Example #3\n", "2019-12-30 10:47:28,359 \tSource: Avoid comparing your new congregation with your 
previous one .\n", "2019-12-30 10:47:28,359 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:47:28,359 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje nẹ ọ ghwa ọvo usun rẹ iniọvo wẹn ri vwo ẹwẹn esiri .\n", "2019-12-30 10:47:28,359 Validation result (greedy) at epoch 129, step 38000: bleu: 15.80, loss: 54657.4883, ppl: 10.1430, duration: 45.3206s\n", "2019-12-30 10:47:43,761 Epoch 129 Step: 38100 Batch Loss: 1.024402 Tokens per Sec: 13483, Lr: 0.000103\n", "2019-12-30 10:47:52,636 Epoch 129: total training loss 272.52\n", "2019-12-30 10:47:52,636 EPOCH 130\n", "2019-12-30 10:47:59,156 Epoch 130 Step: 38200 Batch Loss: 1.171645 Tokens per Sec: 13727, Lr: 0.000103\n", "2019-12-30 10:48:14,359 Epoch 130 Step: 38300 Batch Loss: 1.176669 Tokens per Sec: 13231, Lr: 0.000103\n", "2019-12-30 10:48:29,665 Epoch 130 Step: 38400 Batch Loss: 0.868185 Tokens per Sec: 13476, Lr: 0.000103\n", "2019-12-30 10:48:38,023 Epoch 130: total training loss 270.97\n", "2019-12-30 10:48:38,023 EPOCH 131\n", "2019-12-30 10:48:45,104 Epoch 131 Step: 38500 Batch Loss: 0.862965 Tokens per Sec: 13402, Lr: 0.000103\n", "2019-12-30 10:49:00,524 Epoch 131 Step: 38600 Batch Loss: 0.851079 Tokens per Sec: 13642, Lr: 0.000103\n", "2019-12-30 10:49:15,845 Epoch 131 Step: 38700 Batch Loss: 0.964815 Tokens per Sec: 13536, Lr: 0.000103\n", "2019-12-30 10:49:23,363 Epoch 131: total training loss 271.19\n", "2019-12-30 10:49:23,364 EPOCH 132\n", "2019-12-30 10:49:31,112 Epoch 132 Step: 38800 Batch Loss: 0.969305 Tokens per Sec: 13614, Lr: 0.000103\n", "2019-12-30 10:49:46,352 Epoch 132 Step: 38900 Batch Loss: 1.099121 Tokens per Sec: 13440, Lr: 0.000103\n", "2019-12-30 10:50:01,818 Epoch 132 Step: 39000 Batch Loss: 0.760513 Tokens per Sec: 13540, Lr: 0.000103\n", "2019-12-30 10:50:47,065 Example #0\n", "2019-12-30 10:50:47,065 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:50:47,065 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:50:47,065 \tHypothesis: Enana cha chọn avwanre uko vwọ riẹn oka rẹ ota tiọyen .\n", "2019-12-30 10:50:47,065 Example #1\n", "2019-12-30 10:50:47,065 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:50:47,065 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:50:47,065 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:50:47,065 Example #2\n", "2019-12-30 10:50:47,065 \tSource: But freedom from what ?\n", "2019-12-30 10:50:47,066 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:50:47,066 \tHypothesis: ( Jems 4 : 5 ) Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 10:50:47,066 Example #3\n", "2019-12-30 10:50:47,066 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:50:47,066 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:50:47,066 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje ihwo kpokpọ phia - a .\n", "2019-12-30 10:50:47,066 Validation result (greedy) at epoch 132, step 39000: bleu: 16.30, loss: 54734.2656, ppl: 10.1760, duration: 45.2481s\n", "2019-12-30 10:50:53,964 Epoch 132: total training loss 269.37\n", "2019-12-30 10:50:53,965 EPOCH 133\n", "2019-12-30 10:51:02,376 Epoch 133 Step: 39100 Batch Loss: 0.828210 Tokens per Sec: 13389, Lr: 0.000103\n", "2019-12-30 10:51:17,695 Epoch 133 Step: 39200 Batch Loss: 0.759475 Tokens per Sec: 13460, 
Lr: 0.000103\n", "2019-12-30 10:51:33,104 Epoch 133 Step: 39300 Batch Loss: 0.908652 Tokens per Sec: 13483, Lr: 0.000103\n", "2019-12-30 10:51:39,408 Epoch 133: total training loss 268.90\n", "2019-12-30 10:51:39,408 EPOCH 134\n", "2019-12-30 10:51:48,302 Epoch 134 Step: 39400 Batch Loss: 1.009649 Tokens per Sec: 13389, Lr: 0.000103\n", "2019-12-30 10:52:03,643 Epoch 134 Step: 39500 Batch Loss: 0.512203 Tokens per Sec: 13377, Lr: 0.000103\n", "2019-12-30 10:52:19,019 Epoch 134 Step: 39600 Batch Loss: 0.730646 Tokens per Sec: 13460, Lr: 0.000103\n", "2019-12-30 10:52:24,915 Epoch 134: total training loss 269.32\n", "2019-12-30 10:52:24,915 EPOCH 135\n", "2019-12-30 10:52:34,390 Epoch 135 Step: 39700 Batch Loss: 0.344637 Tokens per Sec: 13481, Lr: 0.000103\n", "2019-12-30 10:52:49,638 Epoch 135 Step: 39800 Batch Loss: 0.930713 Tokens per Sec: 13654, Lr: 0.000103\n", "2019-12-30 10:53:04,939 Epoch 135 Step: 39900 Batch Loss: 1.030611 Tokens per Sec: 13331, Lr: 0.000103\n", "2019-12-30 10:53:10,339 Epoch 135: total training loss 269.09\n", "2019-12-30 10:53:10,339 EPOCH 136\n", "2019-12-30 10:53:20,256 Epoch 136 Step: 40000 Batch Loss: 0.857160 Tokens per Sec: 13902, Lr: 0.000103\n", "2019-12-30 10:54:05,542 Example #0\n", "2019-12-30 10:54:05,543 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:54:05,543 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:54:05,543 \tHypothesis: Enana vwo erọnvwọn evo rẹ ayen cha reyọ ẹdia na vwo ruiruo vwẹ idjerhe rọ cha nẹrhẹ a karophiyọ ayen .\n", "2019-12-30 10:54:05,543 Example #1\n", "2019-12-30 10:54:05,543 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:54:05,543 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:54:05,543 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:54:05,543 Example #2\n", "2019-12-30 10:54:05,543 \tSource: But freedom from what ?\n", "2019-12-30 10:54:05,543 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:54:05,543 \tHypothesis: ( Jems 3 : 2 ) Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ohwo obọ ?\n", "2019-12-30 10:54:05,543 Example #3\n", "2019-12-30 10:54:05,544 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:54:05,544 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:54:05,544 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje ihwo kpokpọ phia .\n", "2019-12-30 10:54:05,544 Validation result (greedy) at epoch 136, step 40000: bleu: 15.98, loss: 54951.5312, ppl: 10.2702, duration: 45.2873s\n", "2019-12-30 10:54:20,776 Epoch 136 Step: 40100 Batch Loss: 0.948818 Tokens per Sec: 13477, Lr: 0.000103\n", "2019-12-30 10:54:36,112 Epoch 136 Step: 40200 Batch Loss: 0.906754 Tokens per Sec: 13361, Lr: 0.000103\n", "2019-12-30 10:54:40,773 Epoch 136: total training loss 265.37\n", "2019-12-30 10:54:40,774 EPOCH 137\n", "2019-12-30 10:54:51,402 Epoch 137 Step: 40300 Batch Loss: 0.909005 Tokens per Sec: 13322, Lr: 0.000103\n", "2019-12-30 10:55:06,799 Epoch 137 Step: 40400 Batch Loss: 0.991097 Tokens per Sec: 13797, Lr: 0.000103\n", "2019-12-30 10:55:21,972 Epoch 137 Step: 40500 Batch Loss: 1.036725 Tokens per Sec: 13272, Lr: 0.000103\n", "2019-12-30 10:55:26,080 Epoch 137: total training loss 267.44\n", "2019-12-30 10:55:26,080 EPOCH 138\n", "2019-12-30 10:55:37,392 Epoch 138 Step: 40600 Batch Loss: 0.776227 Tokens per Sec: 
13588, Lr: 0.000103\n", "2019-12-30 10:55:52,827 Epoch 138 Step: 40700 Batch Loss: 0.939818 Tokens per Sec: 13539, Lr: 0.000103\n", "2019-12-30 10:56:08,154 Epoch 138 Step: 40800 Batch Loss: 1.250819 Tokens per Sec: 13468, Lr: 0.000103\n", "2019-12-30 10:56:11,505 Epoch 138: total training loss 264.90\n", "2019-12-30 10:56:11,506 EPOCH 139\n", "2019-12-30 10:56:23,347 Epoch 139 Step: 40900 Batch Loss: 0.792720 Tokens per Sec: 13606, Lr: 0.000103\n", "2019-12-30 10:56:38,746 Epoch 139 Step: 41000 Batch Loss: 0.933622 Tokens per Sec: 13418, Lr: 0.000103\n", "2019-12-30 10:57:24,045 Example #0\n", "2019-12-30 10:57:24,045 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 10:57:24,045 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 10:57:24,045 \tHypothesis: Enana vwo erọnvwọn efa rẹ ayen che ru vwẹ idjerhe rẹ ẹwẹn avwanre cha vwọ riẹn nẹ ọnana cha nẹrhẹ a riẹn ọke rẹ avwanre da karophiyọ emu na .\n", "2019-12-30 10:57:24,045 Example #1\n", "2019-12-30 10:57:24,045 \tSource: Today he is serving at Bethel .\n", "2019-12-30 10:57:24,045 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:57:24,046 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 10:57:24,046 Example #2\n", "2019-12-30 10:57:24,046 \tSource: But freedom from what ?\n", "2019-12-30 10:57:24,046 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 10:57:24,046 \tHypothesis: ( Jems 3 : 2 ) Ẹkẹvuọvo , die yen egbomọphẹ ?\n", "2019-12-30 10:57:24,046 Example #3\n", "2019-12-30 10:57:24,046 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 10:57:24,046 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 10:57:24,046 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje eghọn rẹ ukoko wẹn phia .\n", "2019-12-30 10:57:24,046 Validation result (greedy) at epoch 139, step 41000: bleu: 16.30, loss: 55018.7344, ppl: 10.2995, duration: 45.2998s\n", "2019-12-30 10:57:39,548 Epoch 139 Step: 41100 Batch Loss: 0.953699 Tokens per Sec: 13467, Lr: 0.000103\n", "2019-12-30 10:57:42,150 Epoch 139: total training loss 264.29\n", "2019-12-30 10:57:42,150 EPOCH 140\n", "2019-12-30 10:57:54,778 Epoch 140 Step: 41200 Batch Loss: 0.476705 Tokens per Sec: 13323, Lr: 0.000103\n", "2019-12-30 10:58:10,065 Epoch 140 Step: 41300 Batch Loss: 0.677177 Tokens per Sec: 13568, Lr: 0.000103\n", "2019-12-30 10:58:25,452 Epoch 140 Step: 41400 Batch Loss: 1.141634 Tokens per Sec: 13594, Lr: 0.000103\n", "2019-12-30 10:58:27,576 Epoch 140: total training loss 265.54\n", "2019-12-30 10:58:27,576 EPOCH 141\n", "2019-12-30 10:58:40,801 Epoch 141 Step: 41500 Batch Loss: 1.216569 Tokens per Sec: 13569, Lr: 0.000103\n", "2019-12-30 10:58:56,069 Epoch 141 Step: 41600 Batch Loss: 0.911878 Tokens per Sec: 13565, Lr: 0.000103\n", "2019-12-30 10:59:11,382 Epoch 141 Step: 41700 Batch Loss: 1.067541 Tokens per Sec: 13567, Lr: 0.000103\n", "2019-12-30 10:59:12,738 Epoch 141: total training loss 262.44\n", "2019-12-30 10:59:12,738 EPOCH 142\n", "2019-12-30 10:59:26,572 Epoch 142 Step: 41800 Batch Loss: 1.192889 Tokens per Sec: 13255, Lr: 0.000103\n", "2019-12-30 10:59:41,797 Epoch 142 Step: 41900 Batch Loss: 1.019896 Tokens per Sec: 13635, Lr: 0.000103\n", "2019-12-30 10:59:57,077 Epoch 142 Step: 42000 Batch Loss: 0.701142 Tokens per Sec: 13587, Lr: 0.000103\n", "2019-12-30 11:00:42,323 Example #0\n", 
"2019-12-30 11:00:42,324 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 11:00:42,324 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 11:00:42,324 \tHypothesis: Enana vwo erọnvwọn evo rẹ ayen cha reyọ vwo ruiruo vwẹ idjerhe rẹ ẹwẹn roro kpahen oborẹ ayen che ru , ji vwo ẹwẹn ra vwọ riẹn uyota na .\n", "2019-12-30 11:00:42,324 Example #1\n", "2019-12-30 11:00:42,324 \tSource: Today he is serving at Bethel .\n", "2019-12-30 11:00:42,324 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 11:00:42,324 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 11:00:42,324 Example #2\n", "2019-12-30 11:00:42,325 \tSource: But freedom from what ?\n", "2019-12-30 11:00:42,325 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 11:00:42,325 \tHypothesis: ( Rom 5 : 12 ) Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ohwo obọ ?\n", "2019-12-30 11:00:42,325 Example #3\n", "2019-12-30 11:00:42,325 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 11:00:42,325 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 11:00:42,325 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje eghọn rẹ ukoko wẹn phia .\n", "2019-12-30 11:00:42,325 Validation result (greedy) at epoch 142, step 42000: bleu: 15.90, loss: 55022.0195, ppl: 10.3009, duration: 45.2482s\n", "2019-12-30 11:00:43,393 Epoch 142: total training loss 264.55\n", "2019-12-30 11:00:43,393 EPOCH 143\n", "2019-12-30 11:00:57,732 Epoch 143 Step: 42100 Batch Loss: 0.802422 Tokens per Sec: 13652, Lr: 0.000072\n", "2019-12-30 11:01:12,770 Epoch 143 Step: 42200 Batch Loss: 0.955216 Tokens per Sec: 13336, Lr: 0.000072\n", "2019-12-30 11:01:28,096 Epoch 143 Step: 42300 Batch Loss: 0.867277 Tokens per Sec: 13646, Lr: 0.000072\n", "2019-12-30 11:01:28,677 Epoch 143: total training loss 261.10\n", "2019-12-30 11:01:28,677 EPOCH 144\n", "2019-12-30 11:01:43,557 Epoch 144 Step: 42400 Batch Loss: 0.957274 Tokens per Sec: 13700, Lr: 0.000072\n", "2019-12-30 11:01:58,930 Epoch 144 Step: 42500 Batch Loss: 0.882339 Tokens per Sec: 13523, Lr: 0.000072\n", "2019-12-30 11:02:13,904 Epoch 144: total training loss 257.76\n", "2019-12-30 11:02:13,904 EPOCH 145\n", "2019-12-30 11:02:14,254 Epoch 145 Step: 42600 Batch Loss: 0.679638 Tokens per Sec: 12404, Lr: 0.000072\n", "2019-12-30 11:02:29,524 Epoch 145 Step: 42700 Batch Loss: 1.172125 Tokens per Sec: 13464, Lr: 0.000072\n", "2019-12-30 11:02:44,844 Epoch 145 Step: 42800 Batch Loss: 0.680541 Tokens per Sec: 13553, Lr: 0.000072\n", "2019-12-30 11:02:59,167 Epoch 145: total training loss 257.95\n", "2019-12-30 11:02:59,167 EPOCH 146\n", "2019-12-30 11:03:00,178 Epoch 146 Step: 42900 Batch Loss: 0.599241 Tokens per Sec: 13873, Lr: 0.000072\n", "2019-12-30 11:03:15,567 Epoch 146 Step: 43000 Batch Loss: 0.978298 Tokens per Sec: 13540, Lr: 0.000072\n", "2019-12-30 11:04:00,808 Example #0\n", "2019-12-30 11:04:00,809 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 11:04:00,809 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 11:04:00,809 \tHypothesis: Enana vwo erọnvwọn evo rẹ ayen cha reyọ ẹdia na vwo ruiruo vwẹ idjerhe rọ cha nẹrhẹ a karophiyọ ayen .\n", "2019-12-30 11:04:00,809 Example #1\n", 
"2019-12-30 11:04:00,809 \tSource: Today he is serving at Bethel .\n", "2019-12-30 11:04:00,809 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 11:04:00,809 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 11:04:00,809 Example #2\n", "2019-12-30 11:04:00,809 \tSource: But freedom from what ?\n", "2019-12-30 11:04:00,809 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 11:04:00,809 \tHypothesis: Ẹkẹvuọvo , egbomọphẹ vọ yen che nẹ obọ rọyen rhe ?\n", "2019-12-30 11:04:00,810 Example #3\n", "2019-12-30 11:04:00,810 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 11:04:00,810 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 11:04:00,810 \tHypothesis: Wọ vwẹ itetoro wẹn vwọ kpahotọ kẹ ukoko wẹn .\n", "2019-12-30 11:04:00,810 Validation result (greedy) at epoch 146, step 43000: bleu: 16.16, loss: 55299.5703, ppl: 10.4228, duration: 45.2428s\n", "2019-12-30 11:04:16,102 Epoch 146 Step: 43100 Batch Loss: 1.069764 Tokens per Sec: 13522, Lr: 0.000072\n", "2019-12-30 11:04:29,727 Epoch 146: total training loss 257.56\n", "2019-12-30 11:04:29,727 EPOCH 147\n", "2019-12-30 11:04:31,410 Epoch 147 Step: 43200 Batch Loss: 1.045450 Tokens per Sec: 13025, Lr: 0.000072\n", "2019-12-30 11:04:46,618 Epoch 147 Step: 43300 Batch Loss: 1.013279 Tokens per Sec: 13482, Lr: 0.000072\n", "2019-12-30 11:05:01,949 Epoch 147 Step: 43400 Batch Loss: 1.098271 Tokens per Sec: 13542, Lr: 0.000072\n", "2019-12-30 11:05:14,946 Epoch 147: total training loss 256.91\n", "2019-12-30 11:05:14,947 EPOCH 148\n", "2019-12-30 11:05:17,323 Epoch 148 Step: 43500 Batch Loss: 1.002264 Tokens per Sec: 14181, Lr: 0.000072\n", "2019-12-30 11:05:32,558 Epoch 148 Step: 43600 Batch Loss: 0.847806 Tokens per Sec: 13085, Lr: 0.000072\n", "2019-12-30 11:05:47,933 Epoch 148 Step: 43700 Batch Loss: 1.022300 Tokens per Sec: 13785, Lr: 0.000072\n", "2019-12-30 11:06:00,410 Epoch 148: total training loss 257.46\n", "2019-12-30 11:06:00,410 EPOCH 149\n", "2019-12-30 11:06:03,241 Epoch 149 Step: 43800 Batch Loss: 0.836033 Tokens per Sec: 13278, Lr: 0.000072\n", "2019-12-30 11:06:18,475 Epoch 149 Step: 43900 Batch Loss: 0.791132 Tokens per Sec: 13542, Lr: 0.000072\n", "2019-12-30 11:06:33,692 Epoch 149 Step: 44000 Batch Loss: 1.074546 Tokens per Sec: 13531, Lr: 0.000072\n", "2019-12-30 11:07:18,920 Example #0\n", "2019-12-30 11:07:18,920 \tSource: These orchestral arrangements are composed in such a way that they will prepare our heart and mind for the program to follow .\n", "2019-12-30 11:07:18,920 \tReference: E ru uhworo nana vwẹ idjerhe ro de se muegbe rẹ ubiudu rẹ avwanre hẹrhẹ ọrhuẹrẹphiyọ rẹ ẹdẹ yena .\n", "2019-12-30 11:07:18,920 \tHypothesis: Enana vwẹ idjerhe sansan vwo ru ọnana .\n", "2019-12-30 11:07:18,920 Example #1\n", "2019-12-30 11:07:18,921 \tSource: Today he is serving at Bethel .\n", "2019-12-30 11:07:18,921 \tReference: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 11:07:18,921 \tHypothesis: Nonẹna , ọ ga vwẹ Bẹtẹl .\n", "2019-12-30 11:07:18,921 Example #2\n", "2019-12-30 11:07:18,921 \tSource: But freedom from what ?\n", "2019-12-30 11:07:18,921 \tReference: Ẹkẹvuọvo , ẹdia vọ yen egbomọphẹ na che si ayen nu ?\n", "2019-12-30 11:07:18,921 \tHypothesis: ( Jems 3 : 5 ) Ẹkẹvuọvo , egbomọphẹ vọ yen rhe ?\n", "2019-12-30 11:07:18,921 Example #3\n", "2019-12-30 11:07:18,921 \tSource: Avoid comparing your new congregation with your previous one .\n", "2019-12-30 11:07:18,921 \tReference: Wọ vwẹ ukoko kpokpọ na vwọ 
vwanvwen ọ rẹ wo nurhe na - a .\n", "2019-12-30 11:07:18,921 \tHypothesis: Wọ vwẹ ukoko wẹn vwo dje eghọn ukoko wẹn phia .\n", "2019-12-30 11:07:18,922 Validation result (greedy) at epoch 149, step 44000: bleu: 16.08, loss: 55201.0625, ppl: 10.3794, duration: 45.2292s\n", "2019-12-30 11:07:30,918 Epoch 149: total training loss 257.30\n", "2019-12-30 11:07:30,918 EPOCH 150\n", "2019-12-30 11:07:34,040 Epoch 150 Step: 44100 Batch Loss: 0.867820 Tokens per Sec: 12924, Lr: 0.000072\n", "2019-12-30 11:07:49,469 Epoch 150 Step: 44200 Batch Loss: 1.006578 Tokens per Sec: 13649, Lr: 0.000072\n", "2019-12-30 11:08:04,866 Epoch 150 Step: 44300 Batch Loss: 0.955338 Tokens per Sec: 13483, Lr: 0.000072\n", "2019-12-30 11:08:16,210 Epoch 150: total training loss 254.53\n", "2019-12-30 11:08:16,211 Training ended after 150 epochs.\n", "2019-12-30 11:08:16,211 Best validation result (greedy) at step 18000: 8.74 ppl.\n", "2019-12-30 11:08:35,323 dev bleu: 15.91 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2019-12-30 11:08:35,325 Translations saved to: models/enurh_transformer/00018000.hyps.dev\n", "2019-12-30 11:09:15,949 test bleu: 28.82 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2019-12-30 11:09:15,951 Translations saved to: models/enurh_transformer/00018000.hyps.test\n" ], "name": "stdout" } ] } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "MBoDS09JM807", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "1c2641ea-ee0f-4eb3-ef56-d44dc298e17c" }, "source": [ "# Copy the created models from the notebook storage to Google Drive for persistent storage \n", "# Note: best.ckpt is a symbolic link, which Google Drive cannot store (hence the 'cannot create symbolic link' message below); the numbered checkpoint files themselves are still copied.\n", "#!cp -r joeynmt/models/${src}${tgt}_transformer/* \"$gdrive_path/models/${src}${tgt}_transformer/\"\n", "!cp -r joeynmt/models/${src}${tgt}_transformer/* drive/'My Drive'/masakhane/en-urh-baseline/models/enurh_transformer" ], "execution_count": 56, "outputs": [ { "output_type": "stream", "text": [ "cp: cannot create symbolic link 'drive/My Drive/masakhane/en-urh-baseline/models/enurh_transformer/best.ckpt': Operation not supported\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "n94wlrCjVc17", "colab": { "base_uri": "https://localhost:8080/", "height": 765 }, "outputId": "1e1eb57c-d239-4e91-fca2-4dbd6e652bf8" }, "source": [ "# Output our validation scores per checkpoint (loss, perplexity, BLEU)\n", "! 
cat \"$gdrive_path/models/${src}${tgt}_transformer/validations.txt\"" ], "execution_count": 57, "outputs": [ { "output_type": "stream", "text": [ "Steps: 1000\tLoss: 82894.25781\tPPL: 33.57090\tbleu: 1.38076\tLR: 0.00030000\t*\n", "Steps: 2000\tLoss: 71127.99219\tPPL: 20.38746\tbleu: 3.38906\tLR: 0.00030000\t*\n", "Steps: 3000\tLoss: 64387.64844\tPPL: 15.32086\tbleu: 5.88120\tLR: 0.00030000\t*\n", "Steps: 4000\tLoss: 60269.48047\tPPL: 12.86689\tbleu: 7.95364\tLR: 0.00030000\t*\n", "Steps: 5000\tLoss: 57490.78516\tPPL: 11.43726\tbleu: 9.62943\tLR: 0.00030000\t*\n", "Steps: 6000\tLoss: 55914.19141\tPPL: 10.69792\tbleu: 10.53420\tLR: 0.00030000\t*\n", "Steps: 7000\tLoss: 54344.22656\tPPL: 10.00918\tbleu: 11.26628\tLR: 0.00030000\t*\n", "Steps: 8000\tLoss: 53979.01953\tPPL: 9.85543\tbleu: 11.76212\tLR: 0.00030000\t*\n", "Steps: 9000\tLoss: 52996.83594\tPPL: 9.45355\tbleu: 12.13361\tLR: 0.00030000\t*\n", "Steps: 10000\tLoss: 52344.57031\tPPL: 9.19576\tbleu: 12.86276\tLR: 0.00030000\t*\n", "Steps: 11000\tLoss: 52315.73438\tPPL: 9.18453\tbleu: 13.28933\tLR: 0.00030000\t*\n", "Steps: 12000\tLoss: 51660.80469\tPPL: 8.93306\tbleu: 13.25257\tLR: 0.00030000\t*\n", "Steps: 13000\tLoss: 51232.82812\tPPL: 8.77247\tbleu: 13.75931\tLR: 0.00030000\t*\n", "Steps: 14000\tLoss: 51388.36328\tPPL: 8.83050\tbleu: 14.09364\tLR: 0.00030000\t\n", "Steps: 15000\tLoss: 51209.93750\tPPL: 8.76397\tbleu: 14.14855\tLR: 0.00030000\t*\n", "Steps: 16000\tLoss: 51274.94141\tPPL: 8.78815\tbleu: 14.42179\tLR: 0.00030000\t\n", "Steps: 17000\tLoss: 51369.29297\tPPL: 8.82336\tbleu: 14.14162\tLR: 0.00030000\t\n", "Steps: 18000\tLoss: 51154.03516\tPPL: 8.74322\tbleu: 14.77395\tLR: 0.00030000\t*\n", "Steps: 19000\tLoss: 51345.75781\tPPL: 8.81457\tbleu: 14.83231\tLR: 0.00030000\t\n", "Steps: 20000\tLoss: 51586.19141\tPPL: 8.90486\tbleu: 15.02134\tLR: 0.00030000\t\n", "Steps: 21000\tLoss: 51528.21094\tPPL: 8.88300\tbleu: 14.85186\tLR: 0.00030000\t\n", "Steps: 22000\tLoss: 52141.00000\tPPL: 9.11675\tbleu: 15.31632\tLR: 0.00030000\t\n", "Steps: 23000\tLoss: 51979.78516\tPPL: 9.05467\tbleu: 15.23640\tLR: 0.00030000\t\n", "Steps: 24000\tLoss: 52359.16016\tPPL: 9.20145\tbleu: 15.39359\tLR: 0.00021000\t\n", "Steps: 25000\tLoss: 52419.07031\tPPL: 9.22484\tbleu: 15.69031\tLR: 0.00021000\t\n", "Steps: 26000\tLoss: 52550.09375\tPPL: 9.27622\tbleu: 15.56656\tLR: 0.00021000\t\n", "Steps: 27000\tLoss: 52610.76172\tPPL: 9.30010\tbleu: 16.12662\tLR: 0.00021000\t\n", "Steps: 28000\tLoss: 53156.50781\tPPL: 9.51775\tbleu: 15.92374\tLR: 0.00021000\t\n", "Steps: 29000\tLoss: 53019.18750\tPPL: 9.46251\tbleu: 15.87944\tLR: 0.00021000\t\n", "Steps: 30000\tLoss: 53459.34375\tPPL: 9.64071\tbleu: 15.72534\tLR: 0.00014700\t\n", "Steps: 31000\tLoss: 53562.06250\tPPL: 9.68278\tbleu: 15.54623\tLR: 0.00014700\t\n", "Steps: 32000\tLoss: 53618.83984\tPPL: 9.70611\tbleu: 15.79375\tLR: 0.00014700\t\n", "Steps: 33000\tLoss: 53921.97266\tPPL: 9.83162\tbleu: 15.86053\tLR: 0.00014700\t\n", "Steps: 34000\tLoss: 54038.21094\tPPL: 9.88018\tbleu: 15.66958\tLR: 0.00014700\t\n", "Steps: 35000\tLoss: 54266.53906\tPPL: 9.97627\tbleu: 15.56806\tLR: 0.00014700\t\n", "Steps: 36000\tLoss: 54322.67969\tPPL: 10.00004\tbleu: 15.82182\tLR: 0.00010290\t\n", "Steps: 37000\tLoss: 54445.50391\tPPL: 10.05224\tbleu: 15.75498\tLR: 0.00010290\t\n", "Steps: 38000\tLoss: 54657.48828\tPPL: 10.14297\tbleu: 15.80186\tLR: 0.00010290\t\n", "Steps: 39000\tLoss: 54734.26562\tPPL: 10.17603\tbleu: 16.30163\tLR: 0.00010290\t\n", "Steps: 40000\tLoss: 54951.53125\tPPL: 10.27018\tbleu: 15.98095\tLR: 
0.00010290\t\n", "Steps: 41000\tLoss: 55018.73438\tPPL: 10.29948\tbleu: 16.30284\tLR: 0.00010290\t\n", "Steps: 42000\tLoss: 55022.01953\tPPL: 10.30091\tbleu: 15.90083\tLR: 0.00007203\t\n", "Steps: 43000\tLoss: 55299.57031\tPPL: 10.42281\tbleu: 16.16090\tLR: 0.00007203\t\n", "Steps: 44000\tLoss: 55201.06250\tPPL: 10.37938\tbleu: 16.08263\tLR: 0.00007203\t\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "66WhRE9lIhoD", "colab": { "base_uri": "https://localhost:8080/", "height": 68 }, "outputId": "54606f68-573d-4f5c-af9a-cfe0249655bd" }, "source": [ "# Test our model\n", "! cd joeynmt; python3 -m joeynmt test \"$gdrive_path/models/${src}${tgt}_transformer/config.yaml\"" ], "execution_count": 58, "outputs": [ { "output_type": "stream", "text": [ "2019-12-30 13:49:27,762 Hello! This is Joey-NMT.\n", "2019-12-30 13:49:49,656 dev bleu: 15.91 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2019-12-30 13:50:27,825 test bleu: 28.82 [Beam search decoding with beam size = 5 and alpha = 1.0]\n" ], "name": "stdout" } ] } ] }