{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "starter_notebook.ipynb", "provenance": [], "collapsed_sections": [], "toc_visible": true, "include_colab_link": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.6" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "view-in-github", "colab_type": "text" }, "source": [ "\"Open" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Igc5itf-xMGj" }, "source": [ "# Masakhane - Machine Translation for African Languages (Using JoeyNMT)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "x4fXCKCf36IK" }, "source": [ "## Note before beginning:\n", "### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus. \n", "\n", "### - The tl;dr: Go to the **\"TODO\"** comments which will tell you what to update to get up and running\n", "\n", "### - If you actually want to have a clue what you're doing, read the text and peek at the links\n", "\n", "### - With 100 epochs, it should take around 7 hours to run in Google Colab\n", "\n", "### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com\n", "\n", "### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "l929HimrxS0a" }, "source": [ "## Retrieve your data & make a parallel corpus\n", "\n", "If you are wanting to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and to convert them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.\n", "\n", "Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe. 
" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "oGRmDELn7Az0", "outputId": "56d0cd12-61f6-4a4b-d2b8-74abfff2f815", "colab": { "base_uri": "https://localhost:8080/", "height": 127 } }, "source": [ "from google.colab import drive\n", "drive.mount('/content/drive')" ], "execution_count": 1, "outputs": [ { "output_type": "stream", "text": [ "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n", "\n", "Enter your authorization code:\n", "··········\n", "Mounted at /content/drive\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "Cn3tgQLzUxwn", "colab": {} }, "source": [ "# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:\n", "# These will also become the suffix's of all vocab and corpus files used throughout\n", "import os\n", "source_language = \"yo\"\n", "target_language = \"en\"\n", "lc = False # If True, lowercase the data.\n", "seed = 42 # Random seed for shuffling.\n", "tag = \"baseline\" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted\n", "\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "os.environ[\"tag\"] = tag\n", "\n", "# This will save it to a folder in our gdrive instead! \n", "!mkdir -p \"/content/drive/My Drive/masakhane/$src-$tgt-$tag\"\n", "g_drive_path = \"/content/drive/My Drive/masakhane/%s-%s-%s\" % (source_language, target_language, tag)\n", "os.environ[\"gdrive_path\"] = g_drive_path\n", "models_path = '%s/models/%s%s_transformer'% (g_drive_path, source_language, target_language)\n", "# model temporary directory for training\n", "model_temp_dir = \"/content/drive/My Drive/masakhane/model-temp\"\n", "# model permanent storage on the drive\n", "!mkdir -p \"$gdrive_path/models/${src}${tgt}_transformer/\"" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "kBSgJHEw7Nvx", "outputId": "6e14d6cf-9290-4fc3-cb32-3deb7224255a", "colab": { "base_uri": "https://localhost:8080/", "height": 34 } }, "source": [ "!echo $gdrive_path" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "/content/drive/My Drive/masakhane/yo-en-baseline\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "gA75Fs9ys8Y9", "outputId": "83fb02a5-bd6b-4b83-b155-078ae07ded95", "colab": { "base_uri": "https://localhost:8080/", "height": 102 } }, "source": [ "#TODO: Skip for retrain\n", "# Install opus-tools\n", "! 
pip install opustools-pkg " ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "Collecting opustools-pkg\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/6c/9f/e829a0cceccc603450cd18e1ff80807b6237a88d9a8df2c0bb320796e900/opustools_pkg-0.0.52-py3-none-any.whl (80kB)\n", "\u001b[K |████████████████████████████████| 81kB 9.5MB/s \n", "\u001b[?25hInstalling collected packages: opustools-pkg\n", "Successfully installed opustools-pkg-0.0.52\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "xq-tDZVks7ZD", "outputId": "12ccab59-4d5d-4e28-8163-4a39fe17f233", "colab": { "base_uri": "https://localhost:8080/", "height": 221 } }, "source": [ "#TODO: Skip for retrain\n", "# Downloading our corpus\n", "! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q\n", "\n", "# extract the corpus file (OPUS names the archive alphabetically, so either this gunzip or the next cell's will fail harmlessly)\n", "! gunzip JW300_latest_xml_$src-$tgt.xml.gz" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "\n", "Alignment file /proj/nlpl/data/OPUS/JW300/latest/xml/en-yo.xml.gz not found. The following files are available for downloading:\n", "\n", " 4 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en-yo.xml.gz\n", " 263 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en.zip\n", " 58 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/yo.zip\n", "\n", " 325 MB Total size\n", "./JW300_latest_xml_en-yo.xml.gz ... 100% of 4 MB\n", "./JW300_latest_xml_en.zip ... 100% of 263 MB\n", "./JW300_latest_xml_yo.zip ... 100% of 58 MB\n", "gzip: JW300_latest_xml_yo-en.xml.gz: No such file or directory\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "doraUei7W31h", "colab_type": "code", "colab": {} }, "source": [ "# extract the corpus file, trying the reversed language-pair order\n", "! gunzip JW300_latest_xml_$tgt-$src.xml.gz" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "n48GDRnP8y2G", "colab_type": "code", "outputId": "4f30b67f-6d04-4dd9-fa2a-859eb5a076b2", "colab": { "base_uri": "https://localhost:8080/", "height": 578 } }, "source": [ "#TODO: Skip for retrain\n", "# Download the global test set.\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", " \n", "# And the specific test set for this language pair.\n", "os.environ[\"trg\"] = target_language \n", "os.environ[\"src\"] = source_language \n", "\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$src.en \n", "! mv test.en-$src.en test.en\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$src.$src \n", "! mv test.en-$src.$src test.$src" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "--2020-04-07 20:43:18-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 
200 OK\n", "Length: 277791 (271K) [text/plain]\n", "Saving to: ‘test.en-any.en’\n", "\n", "\rtest.en-any.en 0%[ ] 0 --.-KB/s \rtest.en-any.en 100%[===================>] 271.28K --.-KB/s in 0.008s \n", "\n", "2020-04-07 20:43:19 (32.5 MB/s) - ‘test.en-any.en’ saved [277791/277791]\n", "\n", "--2020-04-07 20:43:22-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-yo.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 201994 (197K) [text/plain]\n", "Saving to: ‘test.en-yo.en’\n", "\n", "test.en-yo.en 100%[===================>] 197.26K --.-KB/s in 0.005s \n", "\n", "2020-04-07 20:43:22 (36.5 MB/s) - ‘test.en-yo.en’ saved [201994/201994]\n", "\n", "--2020-04-07 20:43:27-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-yo.yo\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 280073 (274K) [text/plain]\n", "Saving to: ‘test.en-yo.yo’\n", "\n", "test.en-yo.yo 100%[===================>] 273.51K --.-KB/s in 0.007s \n", "\n", "2020-04-07 20:43:29 (36.3 MB/s) - ‘test.en-yo.yo’ saved [280073/280073]\n", "\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "NqDG-CI28y2L", "colab_type": "code", "outputId": "6a52eb39-aff5-41b8-d427-aa128cf73e76", "colab": { "base_uri": "https://localhost:8080/", "height": 34 } }, "source": [ "#TODO: Skip for retrain\n", "# Read the test data to filter from train and dev splits.\n", "# Store english portion in set for quick filtering checks.\n", "en_test_sents = set()\n", "filter_test_sents = \"test.en-any.en\"\n", "j = 0\n", "with open(filter_test_sents) as f:\n", " for line in f:\n", " en_test_sents.add(line.strip())\n", " j += 1\n", "print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "Loaded 3571 global test sentences to filter from the training/dev data.\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "3CNdwLBCfSIl", "outputId": "c7e16fbb-ba42-41b2-b73b-49df100de222", "colab": { "base_uri": "https://localhost:8080/", "height": 159 } }, "source": [ "#TODO: Skip for retrain\n", "import pandas as pd\n", "\n", "# TMX file to dataframe\n", "source_file = 'jw300.' + source_language\n", "target_file = 'jw300.' 
+ target_language\n", "\n", "source = []\n", "target = []\n", "skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.\n", "with open(source_file) as f:\n", " for i, line in enumerate(f):\n", " # Skip sentences that are contained in the test set.\n", " if line.strip() not in en_test_sents:\n", " source.append(line.strip())\n", " else:\n", " skip_lines.append(i) \n", "with open(target_file) as f:\n", " for j, line in enumerate(f):\n", " # Only add to corpus if corresponding source was not skipped.\n", " if j not in skip_lines:\n", " target.append(line.strip())\n", " \n", "print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i))\n", " \n", "df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])\n", "# if you get TypeError: data argument can't be an iterator is because of your zip version run this below\n", "#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])\n", "df.head(3)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "Loaded data and skipped 1025/474986 lines since contained in test set.\n" ], "name": "stdout" }, { "output_type": "execute_result", "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
source_sentencetarget_sentence
0Lílo Àkàbà — Ǹjẹ́ O Máa Ń Ṣe Àyẹ̀wò Wọ̀nyí Tó...Using Ladders — Do You Make These Safety Checks ?
1Látọwọ́ akọ̀ròyìn Jí !By Awake !
2ní Irelandcorrespondent in Ireland
\n", "
" ], "text/plain": [ " source_sentence target_sentence\n", "0 Lílo Àkàbà — Ǹjẹ́ O Máa Ń Ṣe Àyẹ̀wò Wọ̀nyí Tó... Using Ladders — Do You Make These Safety Checks ?\n", "1 Látọwọ́ akọ̀ròyìn Jí ! By Awake !\n", "2 ní Ireland correspondent in Ireland" ] }, "metadata": { "tags": [] }, "execution_count": 9 } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "YkuK3B4p2AkN" }, "source": [ "## Pre-processing and export\n", "\n", "It is generally a good idea to remove duplicate translations and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned.\n", "\n", "In addition we will split our data into dev/test/train and export to the filesystem." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "M_2ouEOH1_1q", "outputId": "06f01841-5cdf-4103-bc12-fb7bce7bc4d9", "colab": { "base_uri": "https://localhost:8080/", "height": 187 } }, "source": [ "#TODO: Skip for retrain\n", "# drop duplicate translations\n", "df_pp = df.drop_duplicates()\n", "\n", "# drop conflicting translations\n", "# (this is optional and something that you might want to comment out \n", "# depending on the size of your corpus)\n", "df_pp.drop_duplicates(subset='source_sentence', inplace=True)\n", "df_pp.drop_duplicates(subset='target_sentence', inplace=True)\n", "\n", "# Shuffle the data to remove bias in dev set selection.\n", "df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:6: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame\n", "\n", "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " \n", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame\n", "\n", "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " import sys\n" ], "name": "stderr" } ] }, { "cell_type": "code", "metadata": { "id": "Z_1BwAApEtMk", "colab_type": "code", "outputId": "bf6ad039-dd32-4f7b-9441-cf00e769325c", "colab": { "base_uri": "https://localhost:8080/", "height": 1000 } }, "source": [ "#TODO: Skip for retrain\n", "# Install fuzzy wuzzy to remove \"almost duplicate\" sentences in the\n", "# test and training sets.\n", "! pip install fuzzywuzzy\n", "! pip install python-Levenshtein\n", "import time\n", "from fuzzywuzzy import process\n", "import numpy as np\n", "\n", "# reset the index of the training set after previous filtering\n", "df_pp.reset_index(drop=False, inplace=True)\n", "\n", "# Remove samples from the training data set if they \"almost overlap\" with the\n", "# samples in the test set.\n", "\n", "# Filtering function. Adjust pad to narrow down the candidate matches to\n", "# within a certain length of characters of the given sample.\n", "def fuzzfilter(sample, candidates, pad):\n", " candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad] \n", " if len(candidates) > 0:\n", " return process.extractOne(sample, candidates)[1]\n", " else:\n", " return np.nan\n", "\n", "# NOTE - This might run slow depending on the size of your training set. 
We are\n", "# printing some information to help you track how long it would take. \n", "scores = []\n", "start_time = time.time()\n", "for idx, row in df_pp.iterrows():\n", " scores.append(fuzzfilter(row['source_sentence'], list(en_test_sents), 5))\n", " if idx % 1000 == 0:\n", " hours, rem = divmod(time.time() - start_time, 3600)\n", " minutes, seconds = divmod(rem, 60)\n", " print(\"{:0>2}:{:0>2}:{:05.2f}\".format(int(hours),int(minutes),seconds), \"%0.2f percent complete\" % (100.0*float(idx)/float(len(df_pp))))\n", "\n", "# Filter out \"almost overlapping samples\"\n", "df_pp['scores'] = scores\n", "df_pp = df_pp[df_pp['scores'] < 95]" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "Collecting fuzzywuzzy\n", " Downloading https://files.pythonhosted.org/packages/43/ff/74f23998ad2f93b945c0309f825be92e04e0348e062026998b5eefef4c33/fuzzywuzzy-0.18.0-py2.py3-none-any.whl\n", "Installing collected packages: fuzzywuzzy\n", "Successfully installed fuzzywuzzy-0.18.0\n", "Collecting python-Levenshtein\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/42/a9/d1785c85ebf9b7dfacd08938dd028209c34a0ea3b1bcdb895208bd40a67d/python-Levenshtein-0.12.0.tar.gz (48kB)\n", "\u001b[K |████████████████████████████████| 51kB 8.7MB/s \n", "\u001b[?25hRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from python-Levenshtein) (46.1.3)\n", "Building wheels for collected packages: python-Levenshtein\n", " Building wheel for python-Levenshtein (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for python-Levenshtein: filename=python_Levenshtein-0.12.0-cp36-cp36m-linux_x86_64.whl size=144794 sha256=0233a2657a58078318fa67995ba97920d36a754bc5c5ae6bd1044938e414d250\n", " Stored in directory: /root/.cache/pip/wheels/de/c2/93/660fd5f7559049268ad2dc6d81c4e39e9e36518766eaf7e342\n", "Successfully built python-Levenshtein\n", "Installing collected packages: python-Levenshtein\n", "Successfully installed python-Levenshtein-0.12.0\n", "00:00:00.16 0.00 percent complete\n", "00:00:23.21 0.24 percent complete\n", "00:00:46.27 0.48 percent complete\n", "00:01:09.18 0.71 percent complete\n", "00:01:31.34 0.95 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:01:54.26 1.19 percent complete\n", "00:02:17.83 1.43 percent complete\n", "00:02:40.41 1.66 percent complete\n", "00:03:03.84 1.90 percent complete\n", "00:03:26.18 2.14 percent complete\n", "00:03:49.14 2.38 percent complete\n", "00:04:11.64 2.61 percent complete\n", "00:04:34.39 2.85 percent complete\n", "00:04:58.78 3.09 percent complete\n", "00:05:20.46 3.33 percent complete\n", "00:05:43.36 3.56 percent complete\n", "00:06:06.61 3.80 percent complete\n", "00:06:28.24 4.04 percent complete\n", "00:06:50.05 4.28 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '↑ ↑']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:07:14.21 4.51 percent complete\n", "00:07:36.24 4.75 percent complete\n", "00:07:58.82 4.99 percent complete\n", "00:08:20.53 5.23 percent complete\n", "00:08:42.12 5.46 percent complete\n", "00:09:05.54 5.70 percent complete\n", "00:09:27.72 5.94 percent complete\n", "00:09:49.86 6.18 percent complete\n", "00:10:14.16 6.41 percent complete\n", "00:10:37.63 6.65 percent complete\n", "00:10:58.94 6.89 percent complete\n", "00:11:21.80 7.13 percent complete\n", "00:11:45.94 7.36 percent complete\n", "00:12:11.02 7.60 percent complete\n", "00:12:34.00 7.84 percent complete\n", "00:12:55.73 8.08 percent complete\n", "00:13:18.24 8.31 percent complete\n", "00:13:41.73 8.55 percent complete\n", "00:14:04.32 8.79 percent complete\n", "00:14:28.38 9.03 percent complete\n", "00:14:50.08 9.27 percent complete\n", "00:15:12.34 9.50 percent complete\n", "00:15:36.21 9.74 percent complete\n", "00:15:59.59 9.98 percent complete\n", "00:16:21.11 10.22 percent complete\n", "00:16:44.50 10.45 percent complete\n", "00:17:07.51 10.69 percent complete\n", "00:17:31.10 10.93 percent complete\n", "00:17:53.86 11.17 percent complete\n", "00:18:16.59 11.40 percent complete\n", "00:18:38.77 11.64 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '”']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:19:00.18 11.88 percent complete\n", "00:19:23.67 12.12 percent complete\n", "00:19:46.78 12.35 percent complete\n", "00:20:09.21 12.59 percent complete\n", "00:20:31.55 12.83 percent complete\n", "00:20:54.99 13.07 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '․ ․']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:21:16.35 13.30 percent complete\n", "00:21:40.35 13.54 percent complete\n", "00:22:02.52 13.78 percent complete\n", "00:22:26.25 14.02 percent complete\n", "00:22:48.93 14.25 percent complete\n", "00:23:10.91 14.49 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:23:33.74 14.73 percent complete\n", "00:23:56.98 14.97 percent complete\n", "00:24:18.77 15.20 percent complete\n", "00:24:42.38 15.44 percent complete\n", "00:25:06.24 15.68 percent complete\n", "00:25:29.38 15.92 percent complete\n", "00:25:51.90 16.15 percent complete\n", "00:26:14.36 16.39 percent complete\n", "00:26:38.03 16.63 percent complete\n", "00:27:00.03 16.87 percent complete\n", "00:27:22.03 17.10 percent complete\n", "00:27:46.56 17.34 percent complete\n", "00:28:08.08 17.58 percent complete\n", "00:28:31.27 17.82 percent complete\n", "00:28:53.52 18.05 percent complete\n", "00:29:15.49 18.29 percent complete\n", "00:29:36.54 18.53 percent complete\n", "00:29:57.27 18.77 percent complete\n", "00:30:19.39 19.01 percent complete\n", "00:30:41.44 19.24 percent complete\n", "00:31:02.77 19.48 percent complete\n", "00:31:23.43 19.72 percent complete\n", "00:31:45.61 19.96 percent complete\n", "00:32:06.92 20.19 percent complete\n", "00:32:28.40 20.43 percent complete\n", "00:32:50.66 20.67 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '. .']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:33:12.62 20.91 percent complete\n", "00:33:35.20 21.14 percent complete\n", "00:33:57.20 21.38 percent complete\n", "00:34:18.99 21.62 percent complete\n", "00:34:41.33 21.86 percent complete\n", "00:35:02.17 22.09 percent complete\n", "00:35:23.71 22.33 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '□ ․ ․ ․ ․ ․']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:35:45.05 22.57 percent complete\n", "00:36:06.62 22.81 percent complete\n", "00:36:28.70 23.04 percent complete\n", "00:36:49.11 23.28 percent complete\n", "00:37:11.26 23.52 percent complete\n", "00:37:32.83 23.76 percent complete\n", "00:37:54.10 23.99 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '⇩ ⇩']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:38:15.52 24.23 percent complete\n", "00:38:37.19 24.47 percent complete\n", "00:38:58.69 24.71 percent complete\n", "00:39:21.27 24.94 percent complete\n", "00:39:41.91 25.18 percent complete\n", "00:40:04.18 25.42 percent complete\n", "00:40:25.68 25.66 percent complete\n", "00:40:46.67 25.89 percent complete\n", "00:41:07.00 26.13 percent complete\n", "00:41:28.87 26.37 percent complete\n", "00:41:49.80 26.61 percent complete\n", "00:42:11.61 26.84 percent complete\n", "00:42:32.93 27.08 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '▾ ▾ ▾ ▾']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:42:54.64 27.32 percent complete\n", "00:43:15.73 27.56 percent complete\n", "00:43:36.50 27.80 percent complete\n", "00:43:57.98 28.03 percent complete\n", "00:44:20.30 28.27 percent complete\n", "00:44:41.95 28.51 percent complete\n", "00:45:02.21 28.75 percent complete\n", "00:45:23.14 28.98 percent complete\n", "00:45:44.47 29.22 percent complete\n", "00:46:05.40 29.46 percent complete\n", "00:46:26.27 29.70 percent complete\n", "00:46:47.76 29.93 percent complete\n", "00:47:09.16 30.17 percent complete\n", "00:47:30.80 30.41 percent complete\n", "00:47:51.82 30.65 percent complete\n", "00:48:12.29 30.88 percent complete\n", "00:48:33.14 31.12 percent complete\n", "00:48:54.88 31.36 percent complete\n", "00:49:15.91 31.60 percent complete\n", "00:49:38.08 31.83 percent complete\n", "00:49:58.85 32.07 percent complete\n", "00:50:18.80 32.31 percent complete\n", "00:50:40.71 32.55 percent complete\n", "00:51:02.48 32.78 percent complete\n", "00:51:24.31 33.02 percent complete\n", "00:51:44.98 33.26 percent complete\n", "00:52:05.36 33.50 percent complete\n", "00:52:26.49 33.73 percent complete\n", "00:52:47.55 33.97 percent complete\n", "00:53:07.69 34.21 percent complete\n", "00:53:28.41 34.45 percent complete\n", "00:53:50.19 34.68 percent complete\n", "00:54:10.86 34.92 percent complete\n", "00:54:32.19 35.16 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '” *']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "00:54:53.47 35.40 percent complete\n", "00:55:15.29 35.63 percent complete\n", "00:55:35.80 35.87 percent complete\n", "00:55:56.41 36.11 percent complete\n", "00:56:16.88 36.35 percent complete\n", "00:56:38.90 36.59 percent complete\n", "00:57:00.41 36.82 percent complete\n", "00:57:22.65 37.06 percent complete\n", "00:57:42.91 37.30 percent complete\n", "00:58:04.02 37.54 percent complete\n", "00:58:25.08 37.77 percent complete\n", "00:58:46.45 38.01 percent complete\n", "00:59:07.79 38.25 percent complete\n", "00:59:29.68 38.49 percent complete\n", "00:59:51.17 38.72 percent complete\n", "01:00:13.82 38.96 percent complete\n", "01:00:34.39 39.20 percent complete\n", "01:00:54.94 39.44 percent complete\n", "01:01:16.20 39.67 percent complete\n", "01:01:37.07 39.91 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '*']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:01:57.74 40.15 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '→ ․ ․ ․ ․ ․']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:02:19.83 40.39 percent complete\n", "01:02:41.15 40.62 percent complete\n", "01:03:01.95 40.86 percent complete\n", "01:03:22.18 41.10 percent complete\n", "01:03:43.12 41.34 percent complete\n", "01:04:04.92 41.57 percent complete\n", "01:04:25.17 41.81 percent complete\n", "01:04:45.75 42.05 percent complete\n", "01:05:08.14 42.29 percent complete\n", "01:05:29.03 42.52 percent complete\n", "01:05:50.39 42.76 percent complete\n", "01:06:11.45 43.00 percent complete\n", "01:06:31.60 43.24 percent complete\n", "01:06:52.64 43.47 percent complete\n", "01:07:13.99 43.71 percent complete\n", "01:07:34.49 43.95 percent complete\n", "01:07:54.70 44.19 percent complete\n", "01:08:16.02 44.42 percent complete\n", "01:08:36.91 44.66 percent complete\n", "01:08:57.70 44.90 percent complete\n", "01:09:18.69 45.14 percent complete\n", "01:09:40.28 45.37 percent complete\n", "01:10:00.90 45.61 percent complete\n", "01:10:21.96 45.85 percent complete\n", "01:10:43.41 46.09 percent complete\n", "01:11:05.41 46.33 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓ ↓ ↓ ↓']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:11:26.97 46.56 percent complete\n", "01:11:48.78 46.80 percent complete\n", "01:12:10.30 47.04 percent complete\n", "01:12:31.49 47.28 percent complete\n", "01:12:52.69 47.51 percent complete\n", "01:13:13.70 47.75 percent complete\n", "01:13:35.44 47.99 percent complete\n", "01:13:57.51 48.23 percent complete\n", "01:14:18.13 48.46 percent complete\n", "01:14:39.48 48.70 percent complete\n", "01:14:59.74 48.94 percent complete\n", "01:15:20.85 49.18 percent complete\n", "01:15:41.20 49.41 percent complete\n", "01:16:02.37 49.65 percent complete\n", "01:16:22.81 49.89 percent complete\n", "01:16:44.21 50.13 percent complete\n", "01:17:05.71 50.36 percent complete\n", "01:17:25.84 50.60 percent complete\n", "01:17:46.60 50.84 percent complete\n", "01:18:08.16 51.08 percent complete\n", "01:18:28.46 51.31 percent complete\n", "01:18:49.29 51.55 percent complete\n", "01:19:09.98 51.79 percent complete\n", "01:19:32.26 52.03 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '↑ ↑ ↑']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:19:53.50 52.26 percent complete\n", "01:20:13.37 52.50 percent complete\n", "01:20:34.80 52.74 percent complete\n", "01:20:56.58 52.98 percent complete\n", "01:21:18.52 53.21 percent complete\n", "01:21:40.49 53.45 percent complete\n", "01:22:01.68 53.69 percent complete\n", "01:22:23.90 53.93 percent complete\n", "01:22:44.35 54.16 percent complete\n", "01:23:04.80 54.40 percent complete\n", "01:23:24.84 54.64 percent complete\n", "01:23:45.69 54.88 percent complete\n", "01:24:07.79 55.12 percent complete\n", "01:24:29.13 55.35 percent complete\n", "01:24:50.70 55.59 percent complete\n", "01:25:11.89 55.83 percent complete\n", "01:25:32.91 56.07 percent complete\n", "01:25:53.99 56.30 percent complete\n", "01:26:14.87 56.54 percent complete\n", "01:26:35.53 56.78 percent complete\n", "01:26:57.28 57.02 percent complete\n", "01:27:18.33 57.25 percent complete\n", "01:27:39.39 57.49 percent complete\n", "01:28:02.07 57.73 percent complete\n", "01:28:23.27 57.97 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '— ― ― ― ― ― ― ―']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:28:44.65 58.20 percent complete\n", "01:29:05.16 58.44 percent complete\n", "01:29:26.17 58.68 percent complete\n", "01:29:47.86 58.92 percent complete\n", "01:30:09.04 59.15 percent complete\n", "01:30:30.36 59.39 percent complete\n", "01:30:53.36 59.63 percent complete\n", "01:31:13.33 59.87 percent complete\n", "01:31:34.13 60.10 percent complete\n", "01:31:53.96 60.34 percent complete\n", "01:32:15.21 60.58 percent complete\n", "01:32:36.49 60.82 percent complete\n", "01:32:57.69 61.05 percent complete\n", "01:33:18.32 61.29 percent complete\n", "01:33:39.56 61.53 percent complete\n", "01:34:01.27 61.77 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '․ ․ ․ ․ ․ ․ ․ ․ ․ ․']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:34:22.57 62.00 percent complete\n", "01:34:43.88 62.24 percent complete\n", "01:35:05.39 62.48 percent complete\n", "01:35:26.08 62.72 percent complete\n", "01:35:46.82 62.95 percent complete\n", "01:36:07.74 63.19 percent complete\n", "01:36:29.62 63.43 percent complete\n", "01:36:50.09 63.67 percent complete\n", "01:37:11.16 63.91 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '\\']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:37:31.54 64.14 percent complete\n", "01:37:51.88 64.38 percent complete\n", "01:38:12.26 64.62 percent complete\n", "01:38:34.38 64.86 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓ → →']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:38:54.92 65.09 percent complete\n", "01:39:16.11 65.33 percent complete\n", "01:39:37.93 65.57 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '▸']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:39:58.61 65.81 percent complete\n", "01:40:19.11 66.04 percent complete\n", "01:40:39.11 66.28 percent complete\n", "01:40:59.75 66.52 percent complete\n", "01:41:20.04 66.76 percent complete\n", "01:41:41.59 66.99 percent complete\n", "01:42:02.39 67.23 percent complete\n", "01:42:24.29 67.47 percent complete\n", "01:42:46.35 67.71 percent complete\n", "01:43:06.57 67.94 percent complete\n", "01:43:28.45 68.18 percent complete\n", "01:43:48.73 68.42 percent complete\n", "01:44:09.76 68.66 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓ ↑ ↑']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:44:31.08 68.89 percent complete\n", "01:44:51.88 69.13 percent complete\n", "01:45:13.94 69.37 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↑']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:45:35.30 69.61 percent complete\n", "01:45:55.85 69.84 percent complete\n", "01:46:17.32 70.08 percent complete\n", "01:46:38.50 70.32 percent complete\n", "01:46:59.41 70.56 percent complete\n", "01:47:20.42 70.79 percent complete\n", "01:47:41.07 71.03 percent complete\n", "01:48:01.68 71.27 percent complete\n", "01:48:23.01 71.51 percent complete\n", "01:48:43.90 71.74 percent complete\n", "01:49:05.56 71.98 percent complete\n", "01:49:26.41 72.22 percent complete\n", "01:49:47.57 72.46 percent complete\n", "01:50:08.68 72.69 percent complete\n", "01:50:30.33 72.93 percent complete\n", "01:50:51.63 73.17 percent complete\n", "01:51:12.99 73.41 percent complete\n", "01:51:34.45 73.65 percent complete\n", "01:51:55.31 73.88 percent complete\n", "01:52:15.55 74.12 percent complete\n", "01:52:37.39 74.36 percent complete\n", "01:52:57.69 74.60 percent complete\n", "01:53:18.77 74.83 percent complete\n", "01:53:40.75 75.07 percent complete\n", "01:54:01.98 75.31 percent complete\n", "01:54:22.74 75.55 percent complete\n", "01:54:43.95 75.78 percent complete\n", "01:55:04.66 76.02 percent complete\n", "01:55:25.50 76.26 percent complete\n", "01:55:46.33 76.50 percent complete\n", "01:56:07.53 76.73 percent complete\n", "01:56:29.66 76.97 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓ ↓ ↓']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "01:56:50.14 77.21 percent complete\n", "01:57:10.67 77.45 percent complete\n", "01:57:31.97 77.68 percent complete\n", "01:57:53.28 77.92 percent complete\n", "01:58:14.92 78.16 percent complete\n", "01:58:35.38 78.40 percent complete\n", "01:58:56.91 78.63 percent complete\n", "01:59:19.15 78.87 percent complete\n", "01:59:40.13 79.11 percent complete\n", "01:59:59.84 79.35 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '●']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "02:00:20.28 79.58 percent complete\n", "02:00:40.97 79.82 percent complete\n", "02:01:02.45 80.06 percent complete\n", "02:01:23.50 80.30 percent complete\n", "02:01:44.88 80.53 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓ ↓']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "02:02:06.36 80.77 percent complete\n", "02:02:27.16 81.01 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '․ ․ ․ ․ ․']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "02:02:49.16 81.25 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '⇧']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "02:03:10.46 81.48 percent complete\n", "02:03:30.79 81.72 percent complete\n", "02:03:51.99 81.96 percent complete\n", "02:04:12.28 82.20 percent complete\n", "02:04:33.60 82.44 percent complete\n", "02:04:54.65 82.67 percent complete\n", "02:05:16.59 82.91 percent complete\n", "02:05:38.05 83.15 percent complete\n", "02:05:59.48 83.39 percent complete\n", "02:06:20.02 83.62 percent complete\n", "02:06:41.69 83.86 percent complete\n", "02:07:02.11 84.10 percent complete\n", "02:07:23.23 84.34 percent complete\n", "02:07:44.85 84.57 percent complete\n", "02:08:06.74 84.81 percent complete\n", "02:08:27.44 85.05 percent complete\n", "02:08:48.06 85.29 percent complete\n", "02:09:09.08 85.52 percent complete\n", "02:09:30.60 85.76 percent complete\n", "02:09:51.78 86.00 percent complete\n", "02:10:12.14 86.24 percent complete\n", "02:10:32.79 86.47 percent complete\n", "02:10:54.70 86.71 percent complete\n", "02:11:15.84 86.95 percent complete\n", "02:11:35.87 87.19 percent complete\n", "02:11:56.10 87.42 percent complete\n", "02:12:16.95 87.66 percent complete\n", "02:12:38.57 87.90 percent complete\n", "02:12:58.93 88.14 percent complete\n", "02:13:18.99 88.37 percent complete\n", "02:13:41.31 88.61 percent complete\n", "02:14:01.41 88.85 percent complete\n", "02:14:22.72 89.09 percent complete\n", "02:14:43.65 89.32 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '→ →']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "02:15:04.41 89.56 percent complete\n", "02:15:26.29 89.80 percent complete\n", "02:15:47.92 90.04 percent complete\n", "02:16:09.38 90.27 percent complete\n", "02:16:31.90 90.51 percent complete\n", "02:16:53.25 90.75 percent complete\n", "02:17:14.60 90.99 percent complete\n", "02:17:36.53 91.23 percent complete\n", "02:17:56.41 91.46 percent complete\n", "02:18:17.04 91.70 percent complete\n", "02:18:38.28 91.94 percent complete\n", "02:18:59.20 92.18 percent complete\n", "02:19:20.81 92.41 percent complete\n", "02:19:42.87 92.65 percent complete\n", "02:20:03.77 92.89 percent complete\n", "02:20:24.92 93.13 percent complete\n", "02:20:45.84 93.36 percent complete\n", "02:21:07.57 93.60 percent complete\n", "02:21:28.39 93.84 percent complete\n", "02:21:48.66 94.08 percent complete\n", "02:22:09.81 94.31 percent complete\n", "02:22:30.95 94.55 percent complete\n", "02:22:51.24 94.79 percent complete\n", "02:23:12.67 95.03 percent complete\n", "02:23:33.71 95.26 percent complete\n", "02:23:53.89 95.50 percent complete\n", "02:24:15.22 95.74 percent complete\n", "02:24:36.14 95.98 percent complete\n", "02:24:57.43 96.21 percent complete\n", "02:25:18.39 96.45 percent complete\n", "02:25:40.77 96.69 percent complete\n", "02:26:01.82 96.93 percent complete\n" ], "name": "stdout" }, { "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '․ ․ ․']\n" ], "name": "stderr" }, { "output_type": "stream", "text": [ "02:26:23.14 97.16 percent complete\n", "02:26:44.48 97.40 percent complete\n", "02:27:05.88 97.64 percent complete\n", "02:27:26.90 97.88 percent complete\n", "02:27:47.24 98.11 percent complete\n", "02:28:07.91 98.35 percent complete\n", "02:28:28.42 98.59 percent complete\n", "02:28:50.20 98.83 percent complete\n", "02:29:11.01 99.06 percent complete\n", "02:29:31.39 99.30 percent complete\n", "02:29:51.79 99.54 percent complete\n", "02:30:12.57 99.78 percent complete\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "hxxBOCA-xXhy", "outputId": "84f5c76d-fa6c-445a-f120-cf782de13db4", "colab": { "base_uri": "https://localhost:8080/", "height": 799 } }, "source": [ "#TODO: Skip for retrain\n", "# This section does the split between train/dev for the parallel corpora then saves them as separate files\n", "# We use 1000 dev test and the given test set.\n", "import csv\n", "\n", "# Do the split between dev/train and create parallel corpora\n", "num_dev_patterns = 1000\n", "\n", "# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.\n", "if lc: # Julia: making lowercasing optional\n", " df_pp[\"source_sentence\"] = df_pp[\"source_sentence\"].str.lower()\n", " df_pp[\"target_sentence\"] = df_pp[\"target_sentence\"].str.lower()\n", "\n", "# Julia: test sets are already generated\n", "dev = df_pp.tail(num_dev_patterns) # Herman: Error in original\n", "stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)\n", "\n", "with open(\"train.\"+source_language, \"w\") as src_file, open(\"train.\"+target_language, \"w\") as trg_file:\n", " for index, row in stripped.iterrows():\n", " src_file.write(row[\"source_sentence\"]+\"\\n\")\n", " trg_file.write(row[\"target_sentence\"]+\"\\n\")\n", " \n", "with open(\"dev.\"+source_language, \"w\") as src_file, open(\"dev.\"+target_language, \"w\") as trg_file:\n", " for index, row in 
dev.iterrows():\n", " src_file.write(row[\"source_sentence\"]+\"\\n\")\n", " trg_file.write(row[\"target_sentence\"]+\"\\n\")\n", "\n", "#stripped[[\"source_sentence\"]].to_csv(\"train.\"+source_language, header=False, index=False) # Herman: Added `header=False` everywhere\n", "#stripped[[\"target_sentence\"]].to_csv(\"train.\"+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.\n", "\n", "#dev[[\"source_sentence\"]].to_csv(\"dev.\"+source_language, header=False, index=False)\n", "#dev[[\"target_sentence\"]].to_csv(\"dev.\"+target_language, header=False, index=False)\n", "\n", "# Double-check the format below. There should be no extra quotation marks or weird characters.\n", "! head train.*\n", "! head dev.*" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "==> train.en <==\n", "It identifies us as Jesus ’ followers and as imitators of Jehovah , the Source of love .\n", "To live honestly is to lead a better life .\n", "Exports : Oil , cocoa , coffee , cotton , wood , aluminum\n", "Your reminders are what I am fond of . ”\n", "After a meal , the pancreas responds to increases in the glucose content of the blood , releasing the proper amount of insulin\n", "Jehovah invites people of all nations to draw close to him in prayer .\n", "4 : 18 - 22 .\n", "But when the Israelites were delivered from Egyptian bondage , the prophet Moses had Joseph’s bones taken along for burial in the Promised Land .\n", "The sea was getting rougher , and the fight to stay afloat made me very tired .\n", "Joyous and Thankful Despite Loss ( N .\n", "\n", "==> train.yo <==\n", "Òun ló ń jẹ́ káwọn èèyàn dá wa mọ̀ pé ọmọlẹ́yìn Jésù ni wá àti pé à ń fara wé Jèhófà , Ọlọ́run ìfẹ́ .\n", "Sísọ bá a ṣe jẹ́ gan - an máa ń jẹ́ kí ìgbésí ayé èèyàn dára .\n", "Ohun Àmúṣọrọ̀ : Epo rọ̀bì , kòkó , kọfí , òwú , igi àti tángaran\n", "Àwọn ìránnilétí rẹ ni mo ní ìfẹ́ni fún . 
”\n", "Ẹni Tí Ara Rẹ̀ Le Ẹni Tó Ní Àrùn Ẹni Tó Ní Àrùn Àtọ̀gbẹ Oríṣi Kìíní Àtọ̀gbẹ Oríṣi Kejì\n", "Jèhófà fẹ́ kí gbogbo èèyàn orílẹ̀ - èdè sún mọ́ òun nípasẹ̀ àdúrà .\n", "4 : 18 - 22 .\n", "Àmọ́ nígbà táwọn ọmọ Ísírẹ́lì gba òmìnira kúrò lọ́wọ́ àwọn ará Íjíbítì , wòlíì Mósè ní kí wọ́n kó àwọn egungun Jósẹ́fù dání kí wọ́n lè sin ín sí Ilẹ̀ Ìlérí .\n", "Ìrugùdù òkun náà ń le sí i , àárẹ̀ sì ti mú mi gan - an bí mo ṣe ń sa gbogbo ipá mi kí n má bàa rì lọ sísàlẹ̀ .\n", "Òṣùṣù Ọwọ̀ Ni Wá ( M .\n", "==> dev.en <==\n", "On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "I am alive today because of applying Bible principles\n", "That was the first resurrection of Bible record .\n", "Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "18 Should the Name Jehovah Appear in the New Testament ?\n", "Many of our modern - day fellow believers have similarly demonstrated trust in Jehovah and have taken appropriate action .\n", "Continue in the Spiritual Paradise\n", "What can we learn about clothing from God’s Law to the Israelites ?\n", "Additionally , there have been extensive changes in the grammar and syntax of the language .\n", "Rather , we need to cultivate strong trust in Jehovah while taking whatever appropriate action we can .\n", "\n", "==> dev.yo <==\n", "Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "18 Ṣé Ó Yẹ Kí Orúkọ Náà Jèhófà Wà Nínú Májẹ̀mú Tuntun ?\n", "Ọ̀pọ̀ àwọn ará wa tó jẹ́ olóòótọ́ lóde òní ló ti fi hàn pé àwọn gbẹ́kẹ̀ lé Jèhófà , tí wọ́n sì tún gbé ìgbésẹ̀ tó yẹ .\n", "Má Ṣe Kúrò Nínú Párádísè Tẹ̀mí\n", "Kí la rí kọ́ nínú Òfin tí Ọlọ́run fún àwọn ọmọ Ísírẹ́lì nípa ìmúra ?\n", "Yàtọ̀ síyẹn , àtúnṣe kékeré kọ́ ni wọ́n ti ṣe sí gírámà àti ọ̀nà tí wọ́n ń gbà ṣètò ọ̀rọ̀ nínú àwọn èdè yìí .\n", "Kàkà bẹ́ẹ̀ , a gbọ́dọ̀ gbẹ́kẹ̀ lé Jèhófà , ká sì gbé àwọn ìgbésẹ̀ tó yẹ .\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "epeCydmCyS8X" }, "source": [ "\n", "\n", "---\n", "\n", "\n", "## Installation of JoeyNMT\n", "\n", "JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io) " ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "iBRMm4kMxZ8L", "outputId": "0bea3507-19fd-454c-ff7d-8b56f260a21c", "colab": { "base_uri": "https://localhost:8080/", "height": 1000 } }, "source": [ "# Install JoeyNMT\n", "! git clone https://github.com/joeynmt/joeynmt.git\n", "! cd joeynmt; pip3 install ." 
], "execution_count": 3, "outputs": [ { "output_type": "stream", "text": [ "Cloning into 'joeynmt'...\n", "remote: Enumerating objects: 3, done.\u001b[K\n", "remote: Counting objects: 33% (1/3)\u001b[K\rremote: Counting objects: 66% (2/3)\u001b[K\rremote: Counting objects: 100% (3/3)\u001b[K\rremote: Counting objects: 100% (3/3), done.\u001b[K\n", "remote: Compressing objects: 33% (1/3)\u001b[K\rremote: Compressing objects: 66% (2/3)\u001b[K\rremote: Compressing objects: 100% (3/3)\u001b[K\rremote: Compressing objects: 100% (3/3), done.\u001b[K\n", "Receiving objects: 0% (1/2380) \rReceiving objects: 1% (24/2380) \rReceiving objects: 2% (48/2380) \rReceiving objects: 3% (72/2380) \rReceiving objects: 4% (96/2380) \rReceiving objects: 5% (119/2380) \rReceiving objects: 6% (143/2380) \rReceiving objects: 7% (167/2380) \rReceiving objects: 8% (191/2380) \rReceiving objects: 9% (215/2380) \rReceiving objects: 10% (238/2380) \rReceiving objects: 11% (262/2380) \rReceiving objects: 12% (286/2380) \rReceiving objects: 13% (310/2380) \rReceiving objects: 14% (334/2380) \rReceiving objects: 15% (357/2380) \rReceiving objects: 16% (381/2380) \rReceiving objects: 17% (405/2380) \rReceiving objects: 18% (429/2380) \rReceiving objects: 19% (453/2380) \rReceiving objects: 20% (476/2380) \rReceiving objects: 21% (500/2380) \rReceiving objects: 22% (524/2380) \rReceiving objects: 23% (548/2380) \rReceiving objects: 24% (572/2380) \rReceiving objects: 25% (595/2380) \rReceiving objects: 26% (619/2380) \rReceiving objects: 27% (643/2380) \rReceiving objects: 28% (667/2380) \rReceiving objects: 29% (691/2380) \rReceiving objects: 30% (714/2380) \rReceiving objects: 31% (738/2380) \rReceiving objects: 32% (762/2380) \rReceiving objects: 33% (786/2380) \rReceiving objects: 34% (810/2380) \rReceiving objects: 35% (833/2380) \rReceiving objects: 36% (857/2380) \rReceiving objects: 37% (881/2380) \rReceiving objects: 38% (905/2380) \rReceiving objects: 39% (929/2380) \rReceiving objects: 40% (952/2380) \rReceiving objects: 41% (976/2380) \rReceiving objects: 42% (1000/2380) \rReceiving objects: 43% (1024/2380) \rReceiving objects: 44% (1048/2380) \rReceiving objects: 45% (1071/2380) \rReceiving objects: 46% (1095/2380) \rReceiving objects: 47% (1119/2380) \rReceiving objects: 48% (1143/2380) \rReceiving objects: 49% (1167/2380) \rReceiving objects: 50% (1190/2380) \rReceiving objects: 51% (1214/2380) \rReceiving objects: 52% (1238/2380) \rReceiving objects: 53% (1262/2380) \rReceiving objects: 54% (1286/2380) \rReceiving objects: 55% (1309/2380) \rReceiving objects: 56% (1333/2380) \rReceiving objects: 57% (1357/2380) \rReceiving objects: 58% (1381/2380) \rReceiving objects: 59% (1405/2380) \rReceiving objects: 60% (1428/2380) \rReceiving objects: 61% (1452/2380) \rReceiving objects: 62% (1476/2380) \rReceiving objects: 63% (1500/2380) \rReceiving objects: 64% (1524/2380) \rReceiving objects: 65% (1547/2380) \rReceiving objects: 66% (1571/2380) \rReceiving objects: 67% (1595/2380) \rReceiving objects: 68% (1619/2380) \rReceiving objects: 69% (1643/2380) \rReceiving objects: 70% (1666/2380) \rReceiving objects: 71% (1690/2380) \rReceiving objects: 72% (1714/2380) \rReceiving objects: 73% (1738/2380) \rReceiving objects: 74% (1762/2380) \rReceiving objects: 75% (1785/2380) \rReceiving objects: 76% (1809/2380) \rReceiving objects: 77% (1833/2380) \rReceiving objects: 78% (1857/2380) \rReceiving objects: 79% (1881/2380) \rReceiving objects: 80% (1904/2380) \rReceiving objects: 81% (1928/2380) \rReceiving objects: 82% 
(1952/2380) \rReceiving objects: 83% (1976/2380) \rReceiving objects: 84% (2000/2380) \rReceiving objects: 85% (2023/2380) \rReceiving objects: 86% (2047/2380) \rReceiving objects: 87% (2071/2380) \rReceiving objects: 88% (2095/2380) \rReceiving objects: 89% (2119/2380) \rremote: Total 2380 (delta 0), reused 0 (delta 0), pack-reused 2377\u001b[K\n", "Receiving objects: 90% (2142/2380) \rReceiving objects: 91% (2166/2380) \rReceiving objects: 92% (2190/2380) \rReceiving objects: 93% (2214/2380) \rReceiving objects: 94% (2238/2380) \rReceiving objects: 95% (2261/2380) \rReceiving objects: 96% (2285/2380) \rReceiving objects: 97% (2309/2380) \rReceiving objects: 98% (2333/2380) \rReceiving objects: 99% (2357/2380) \rReceiving objects: 100% (2380/2380) \rReceiving objects: 100% (2380/2380), 2.60 MiB | 13.71 MiB/s, done.\n", "Resolving deltas: 0% (0/1670) \rResolving deltas: 1% (19/1670) \rResolving deltas: 2% (41/1670) \rResolving deltas: 5% (92/1670) \rResolving deltas: 6% (105/1670) \rResolving deltas: 7% (120/1670) \rResolving deltas: 9% (166/1670) \rResolving deltas: 10% (168/1670) \rResolving deltas: 11% (192/1670) \rResolving deltas: 12% (202/1670) \rResolving deltas: 13% (219/1670) \rResolving deltas: 14% (234/1670) \rResolving deltas: 15% (255/1670) \rResolving deltas: 16% (268/1670) \rResolving deltas: 17% (285/1670) \rResolving deltas: 18% (312/1670) \rResolving deltas: 19% (321/1670) \rResolving deltas: 21% (352/1670) \rResolving deltas: 23% (400/1670) \rResolving deltas: 25% (418/1670) \rResolving deltas: 26% (441/1670) \rResolving deltas: 29% (493/1670) \rResolving deltas: 31% (534/1670) \rResolving deltas: 32% (535/1670) \rResolving deltas: 33% (560/1670) \rResolving deltas: 34% (568/1670) \rResolving deltas: 35% (597/1670) \rResolving deltas: 37% (631/1670) \rResolving deltas: 38% (646/1670) \rResolving deltas: 40% (683/1670) \rResolving deltas: 44% (741/1670) \rResolving deltas: 46% (769/1670) \rResolving deltas: 68% (1147/1670) \rResolving deltas: 74% (1237/1670) \rResolving deltas: 75% (1255/1670) \rResolving deltas: 76% (1272/1670) \rResolving deltas: 78% (1315/1670) \rResolving deltas: 79% (1321/1670) \rResolving deltas: 81% (1357/1670) \rResolving deltas: 83% (1394/1670) \rResolving deltas: 84% (1404/1670) \rResolving deltas: 85% (1423/1670) \rResolving deltas: 86% (1443/1670) \rResolving deltas: 87% (1454/1670) \rResolving deltas: 88% (1470/1670) \rResolving deltas: 89% (1492/1670) \rResolving deltas: 90% (1509/1670) \rResolving deltas: 91% (1524/1670) \rResolving deltas: 97% (1630/1670) \rResolving deltas: 99% (1658/1670) \rResolving deltas: 100% (1670/1670) \rResolving deltas: 100% (1670/1670), done.\n", "Processing /content/joeynmt\n", "Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.16.0)\n", "Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (7.0.0)\n", "Requirement already satisfied: numpy<2.0,>=1.14.5 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.18.2)\n", "Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (46.1.3)\n", "Requirement already satisfied: torch>=1.1 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.4.0)\n", "Requirement already satisfied: tensorflow>=1.14 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (2.2.0rc2)\n", "Requirement already satisfied: torchtext in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) 
(0.3.1)\n", "Collecting sacrebleu>=1.3.6\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/f5/58/5c6cc352ea6271125325950715cf8b59b77abe5e93cf29f6e60b491a31d9/sacrebleu-1.4.6-py3-none-any.whl (59kB)\n", "\u001b[K |████████████████████████████████| 61kB 2.2MB/s \n", "\u001b[?25hCollecting subword-nmt\n", " Downloading https://files.pythonhosted.org/packages/74/60/6600a7bc09e7ab38bc53a48a20d8cae49b837f93f5842a41fe513a694912/subword_nmt-0.3.7-py2.py3-none-any.whl\n", "Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (3.2.1)\n", "Requirement already satisfied: seaborn in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.10.0)\n", "Collecting pyyaml>=5.1\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz (269kB)\n", "\u001b[K |████████████████████████████████| 276kB 8.9MB/s \n", "\u001b[?25hCollecting pylint\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/e9/59/43fc36c5ee316bb9aeb7cf5329cdbdca89e5749c34d5602753827c0aa2dc/pylint-2.4.4-py3-none-any.whl (302kB)\n", "\u001b[K |████████████████████████████████| 307kB 15.4MB/s \n", "\u001b[?25hRequirement already satisfied: six==1.12 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.12.0)\n", "Collecting wrapt==1.11.1\n", " Downloading https://files.pythonhosted.org/packages/67/b2/0f71ca90b0ade7fad27e3d20327c996c6252a2ffe88f50a95bba7434eda9/wrapt-1.11.1.tar.gz\n", "Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (3.2.0)\n", "Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.9.0)\n", "Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (2.10.0)\n", "Requirement already satisfied: tensorflow-estimator<2.3.0,>=2.2.0rc0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (2.2.0rc0)\n", "Requirement already satisfied: tensorboard<2.3.0,>=2.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (2.2.0)\n", "Requirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (3.10.0)\n", "Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.3.3)\n", "Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.1.0)\n", "Requirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.2.0)\n", "Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.28.1)\n", "Requirement already satisfied: scipy==1.4.1; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.4.1)\n", "Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.6.3)\n", "Requirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.1.0)\n", "Requirement already satisfied: wheel>=0.26; python_version >= \"3\" in 
/usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.34.2)\n", "Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from torchtext->joeynmt==0.0.1) (2.21.0)\n", "Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from torchtext->joeynmt==0.0.1) (4.38.0)\n", "Collecting mecab-python3\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/18/49/b55a839a77189042960bf96490640c44816073f917d489acbc5d79fa5cc3/mecab_python3-0.996.5-cp36-cp36m-manylinux2010_x86_64.whl (17.1MB)\n", "\u001b[K |████████████████████████████████| 17.1MB 198kB/s \n", "\u001b[?25hCollecting portalocker\n", " Downloading https://files.pythonhosted.org/packages/64/03/9abfb3374d67838daf24f1a388528714bec1debb1d13749f0abd7fb07cfb/portalocker-1.6.0-py2.py3-none-any.whl\n", "Requirement already satisfied: typing in /usr/local/lib/python3.6/dist-packages (from sacrebleu>=1.3.6->joeynmt==0.0.1) (3.6.6)\n", "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (0.10.0)\n", "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (2.8.1)\n", "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (1.2.0)\n", "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (2.4.7)\n", "Requirement already satisfied: pandas>=0.22.0 in /usr/local/lib/python3.6/dist-packages (from seaborn->joeynmt==0.0.1) (1.0.3)\n", "Collecting mccabe<0.7,>=0.6\n", " Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl\n", "Collecting astroid<2.4,>=2.3.0\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/ad/ae/86734823047962e7b8c8529186a1ac4a7ca19aaf1aa0c7713c022ef593fd/astroid-2.3.3-py3-none-any.whl (205kB)\n", "\u001b[K |████████████████████████████████| 215kB 54.7MB/s \n", "\u001b[?25hCollecting isort<5,>=4.2.5\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/e5/b0/c121fd1fa3419ea9bfd55c7f9c4fedfec5143208d8c7ad3ce3db6c623c21/isort-4.3.21-py2.py3-none-any.whl (42kB)\n", "\u001b[K |████████████████████████████████| 51kB 8.4MB/s \n", "\u001b[?25hRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.7.2)\n", "Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.6.0.post3)\n", "Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (0.4.1)\n", "Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.0.1)\n", "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (3.2.1)\n", "Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (1.24.3)\n", "Requirement already satisfied: certifi>=2017.4.17 in 
/usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (2020.4.5.1)\n", "Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (3.0.4)\n", "Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (2.8)\n", "Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->seaborn->joeynmt==0.0.1) (2018.9)\n", "Collecting typed-ast<1.5,>=1.4.0; implementation_name == \"cpython\" and python_version < \"3.8\"\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/90/ed/5459080d95eb87a02fe860d447197be63b6e2b5e9ff73c2b0a85622994f4/typed_ast-1.4.1-cp36-cp36m-manylinux1_x86_64.whl (737kB)\n", "\u001b[K |████████████████████████████████| 747kB 54.8MB/s \n", "\u001b[?25hCollecting lazy-object-proxy==1.4.*\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/0b/dd/b1e3407e9e6913cf178e506cd0dee818e58694d9a5cd1984e3f6a8b9a10f/lazy_object_proxy-1.4.3-cp36-cp36m-manylinux1_x86_64.whl (55kB)\n", "\u001b[K |████████████████████████████████| 61kB 9.8MB/s \n", "\u001b[?25hRequirement already satisfied: cachetools<3.2,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (3.1.1)\n", "Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (0.2.8)\n", "Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (4.0)\n", "Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.3.0)\n", "Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (0.4.8)\n", "Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (3.1.0)\n", "Building wheels for collected packages: joeynmt, pyyaml, wrapt\n", " Building wheel for joeynmt (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for joeynmt: filename=joeynmt-0.0.1-cp36-none-any.whl size=73768 sha256=8f0158e4be23430ea152ac5f4ce11767926f7418820f2e8bc8242188ce72e051\n", " Stored in directory: /tmp/pip-ephem-wheel-cache-aufr9ko9/wheels/db/01/db/751cc9f3e7f6faec127c43644ba250a3ea7ad200594aeda70a\n", " Building wheel for pyyaml (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for pyyaml: filename=PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl size=44621 sha256=4785fab2d698abe3c02a8a790464fe91007e24854e9eb877b89aec6183583f20\n", " Stored in directory: /root/.cache/pip/wheels/a7/c1/ea/cf5bd31012e735dc1dfea3131a2d5eae7978b251083d6247bd\n", " Building wheel for wrapt (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n", " Created wheel for wrapt: filename=wrapt-1.11.1-cp36-cp36m-linux_x86_64.whl size=67442 sha256=99b97592a028f816d94a1bfd4513fc86e3bf1543897137ccd952b644e36780bf\n", " Stored in directory: /root/.cache/pip/wheels/89/67/41/63cbf0f6ac0a6156588b9587be4db5565f8c6d8ccef98202fc\n", "Successfully built joeynmt pyyaml wrapt\n", "Installing collected packages: mecab-python3, portalocker, sacrebleu, subword-nmt, pyyaml, mccabe, typed-ast, lazy-object-proxy, wrapt, astroid, isort, pylint, joeynmt\n", " Found existing installation: PyYAML 3.13\n", " Uninstalling PyYAML-3.13:\n", " Successfully uninstalled PyYAML-3.13\n", " Found existing installation: wrapt 1.12.1\n", " Uninstalling wrapt-1.12.1:\n", " Successfully uninstalled wrapt-1.12.1\n", "Successfully installed astroid-2.3.3 isort-4.3.21 joeynmt-0.0.1 lazy-object-proxy-1.4.3 mccabe-0.6.1 mecab-python3-0.996.5 portalocker-1.6.0 pylint-2.4.4 pyyaml-5.3.1 sacrebleu-1.4.6 subword-nmt-0.3.7 typed-ast-1.4.1 wrapt-1.11.1\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "AaE77Tcppex9" }, "source": [ "# Preprocessing the Data into Subword BPE Tokens\n", "\n", "- One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [(Sennrich, 2015)](https://arxiv.org/abs/1508.07909).\n", "\n", "- It was also shown that by optimizing the number of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685).\n", "\n", "- Below we have the scripts for doing BPE tokenization of our data. We use 4000 BPE codes as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything. Simply running the cells below will be suitable." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "H-TyjtmXB1mL", "outputId": "b25e361c-a078-4d69-ab24-ce22a7e9d940", "colab": { "base_uri": "https://localhost:8080/", "height": 408 } }, "source": [ "#TODO: Skip for retrain\n", "# One of the huge boosts in NMT performance was to use a different method of tokenizing.\n", "# Usually, NMT would tokenize by words. However, a method called BPE gave amazing boosts to performance.\n", "\n", "# Do subword NMT\n", "from os import path\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "\n", "# Learn BPEs on the training data.\n", "os.environ[\"data_path\"] = path.join(\"joeynmt\", \"data\", source_language + target_language) # Herman! \n", "! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt\n", "\n", "# Apply BPE splits to the training, development and test data.\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt\n", "\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src\n",
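"# Note: --vocabulary makes apply-bpe revert any merge that would produce a token\n", "# unseen in the training-side vocabulary, so the dev/test segmentation above and\n", "# below stays consistent with training (see the subword-nmt documentation).\n", "! 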
subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt\n", "\n", "# Create the directory and move everything we care about to the correct location\n", "! mkdir -p $data_path\n", "! cp train.* $data_path\n", "! cp test.* $data_path\n", "! cp dev.* $data_path\n", "! cp bpe.codes.4000 $data_path\n", "! ls $data_path\n", "\n", "# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path\n", "! cp train.* \"$gdrive_path\"\n", "! cp test.* \"$gdrive_path\"\n", "! cp dev.* \"$gdrive_path\"\n", "! cp bpe.codes.4000 \"$gdrive_path\"\n", "! ls \"$gdrive_path\"\n", "\n", "# Create that vocab using build_vocab\n", "! sudo chmod 777 joeynmt/scripts/build_vocab.py\n", "! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path \"$gdrive_path/vocab.txt\"\n", "\n", "# Some output\n", "! echo \"BPE $tgt Sentences\"\n", "! tail -n 5 test.bpe.$tgt\n", "! echo \"Combined BPE Vocab\"\n", "! tail -n 10 \"$gdrive_path/vocab.txt\" # Herman" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "bpe.codes.4000\tdev.en\t test.bpe.yo test.yo\t train.en\n", "dev.bpe.en\tdev.yo\t test.en\t train.bpe.en train.yo\n", "dev.bpe.yo\ttest.bpe.en test.en-any.en train.bpe.yo\n", "bpe.codes.4000\tdev.en\ttest.bpe.en test.en-any.en train.bpe.yo\n", "dev.bpe.en\tdev.yo\ttest.bpe.yo test.yo\t train.en\n", "dev.bpe.yo\tmodels\ttest.en train.bpe.en train.yo\n", "BPE Xhosa Sentences\n", "The large shi@@ eld of faith ( See par@@ ag@@ r@@ ap@@ h@@ s 12 - 14 )\n", "The hel@@ me@@ t of sal@@ vation ( See par@@ ag@@ r@@ ap@@ h@@ s 15 - 18 )\n", "I’@@ ve found that people respon@@ d well when they see that you are pas@@ sion@@ ate about the Bible and are doing your best to help them . ”\n", "The s@@ word of the spirit ( See par@@ ag@@ r@@ ap@@ h@@ s 19 - 20 )\n", "With Jehovah’s help , we can stand fir@@ m against him !\n", "Combined BPE Vocab\n", "ỵ\n", "bítì\n", "ẹ̀rún\n", "Ísír@@\n", "×\n", "ạ@@\n", "Íjí@@\n", "ẽ@@\n", "Gẹ̀ẹ́@@\n", "ş\n" ], "name": "stdout" } ] },
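{ "cell_type": "markdown", "metadata": { "colab_type": "text" }, "source": [ "Before moving on, it can help to see what BPE actually did to the text. The next cell is a minimal, optional sketch (not part of the original pipeline): it loads the learned `bpe.codes.4000` file through `subword-nmt`'s Python API and segments one made-up sentence, so you can watch rare words being split into `@@`-joined subword units. The sample sentence is purely illustrative." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "colab": {} }, "source": [ "# Optional sketch: inspect the learned BPE segmentation directly in Python.\n", "# Assumes bpe.codes.4000 (written by the cell above) is in the working directory.\n", "import codecs\n", "from subword_nmt.apply_bpe import BPE\n", "\n", "with codecs.open(\"bpe.codes.4000\", encoding=\"utf-8\") as codes:\n", "    bpe = BPE(codes)\n", "\n", "# A made-up sentence: frequent words survive intact, rare ones are split into\n", "# subword units joined by the default '@@' separator.\n", "print(bpe.process_line(\"This is a demonstration of subword segmentation .\"))" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ixmzi60WsUZ8" }, "source": [ "# Creating the JoeyNMT Config\n", "\n", "JoeyNMT requires a yaml config. We provide a template below. 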
We've also set a number of sensible defaults that you may play with!\n", "\n", "- We used the Transformer architecture\n", "- We set our dropout reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))\n", "\n", "Things worth playing with:\n", "- The batch size (also recommended to change for low-resourced languages)\n", "- The number of epochs (we've set it low here just so it runs quickly, for testing purposes)\n", "- The decoder options (beam_size, alpha)\n", "- Evaluation metrics (BLEU versus chrF)" ] }, { "cell_type": "code", "metadata": { "id": "Wc47fvWqyxbd", "colab_type": "code", "colab": {} }, "source": [ "def get_last_checkpoint(directory):\n", " # Return the 'best' checkpoint if one exists, otherwise the checkpoint\n", " # with the highest step number (filenames look like '<step>.ckpt').\n", " last_checkpoint = ''\n", " try:\n", " for filename in os.listdir(directory):\n", " if 'best' in filename and filename.endswith(\".ckpt\"):\n", " return filename\n", " if 'best' not in filename and filename.endswith(\".ckpt\"):\n", " if not last_checkpoint or int(filename.split('.')[0]) > int(last_checkpoint.split('.')[0]):\n", " last_checkpoint = filename\n", " except FileNotFoundError as e:\n", " print('Error occurred:', e)\n", " return last_checkpoint" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "x_ffEoFdy1Qo", "colab_type": "code", "outputId": "dedf65b3-e752-4c69-ad3a-ded9f99b9b7c", "colab": { "base_uri": "https://localhost:8080/", "height": 35 } }, "source": [ "# Copy the created models from the temporary storage to main storage on Google Drive for persistent storage.\n", "# The contents of the folder will be overwritten when you start training.\n", "!cp -r \"/content/drive/My Drive/masakhane/model-temp/\"* \"$gdrive_path/models/${src}${tgt}_transformer/\"\n", "last_checkpoint = get_last_checkpoint(models_path)\n", "print('Last checkpoint :', last_checkpoint)" ], "execution_count": 7, "outputs": [ { "output_type": "stream", "text": [ "Last checkpoint : best.ckpt\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "PIs1lY2hxMsl", "colab": {} }, "source": [ "# This creates the config file for our JoeyNMT system. 
It might seem overwhelming, so we've highlighted a couple of useful parameters you'll need to update\n", "# (You can of course play with all the parameters if you'd like!)\n", "\n", "name = '%s%s' % (source_language, target_language)\n", "gdrive_path = os.environ[\"gdrive_path\"]\n", "\n", "# Create the config\n", "config = \"\"\"\n", "name: \"{name}_transformer\"\n", "\n", "data:\n", " src: \"{source_language}\"\n", " trg: \"{target_language}\"\n", " train: \"{gdrive_path}/train.bpe\"\n", " dev: \"{gdrive_path}/dev.bpe\"\n", " test: \"{gdrive_path}/test.bpe\"\n", " level: \"bpe\"\n", " lowercase: False\n", " max_sent_length: 100\n", " src_vocab: \"{gdrive_path}/vocab.txt\"\n", " trg_vocab: \"{gdrive_path}/vocab.txt\"\n", "\n", "testing:\n", " beam_size: 5\n", " alpha: 1.0\n", "\n", "training:\n", " load_model: \"{gdrive_path}/models/{name}_transformer/{last_checkpoint}\" # uncommented to load a pre-trained model from the last checkpoint\n", " random_seed: 42\n", " optimizer: \"adam\"\n", " normalization: \"tokens\"\n", " adam_betas: [0.9, 0.999] \n", " scheduling: \"plateau\" # TODO: try switching from plateau to Noam scheduling\n", " patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.\n", " learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)\n", " learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)\n", " decrease_factor: 0.7\n", " loss: \"crossentropy\"\n", " learning_rate: 0.0003\n", " learning_rate_min: 0.00000001\n", " weight_decay: 0.0\n", " label_smoothing: 0.1\n", " batch_size: 4096\n", " batch_type: \"token\"\n", " eval_batch_size: 3600\n", " eval_batch_type: \"token\"\n", " batch_multiplier: 1\n", " early_stopping_metric: \"ppl\"\n", " epochs: 3 # TODO: Decrease this when just playing around and checking things work. Around 30 epochs is sufficient to check if it's working at all\n", " validation_freq: 1000 # TODO: Set to at least once per epoch.\n", " logging_freq: 100\n", " eval_metric: \"bleu\"\n", " model_dir: \"{model_temp_dir}\"\n", " overwrite: True # TODO: Set to False if you don't want to overwrite possibly existing models. 
\n", " shuffle: True\n", " use_cuda: True\n", " max_output_length: 100\n", " print_valid_sents: [0, 1, 2, 3]\n", " keep_last_ckpts: 3\n", "\n", "model:\n", " initializer: \"xavier\"\n", " bias_initializer: \"zeros\"\n", " init_gain: 1.0\n", " embed_initializer: \"xavier\"\n", " embed_init_gain: 1.0\n", " tied_embeddings: True\n", " tied_softmax: True\n", " encoder:\n", " type: \"transformer\"\n", " num_layers: 6\n", " num_heads: 4 # TODO: Increase to 8 for larger data.\n", " embeddings:\n", " embedding_dim: 256 # TODO: Increase to 512 for larger data.\n", " scale: True\n", " dropout: 0.2\n", " # typically ff_size = 4 x hidden_size\n", " hidden_size: 256 # TODO: Increase to 512 for larger data.\n", " ff_size: 1024 # TODO: Increase to 2048 for larger data.\n", " dropout: 0.3\n", " decoder:\n", " type: \"transformer\"\n", " num_layers: 6\n", " num_heads: 4 # TODO: Increase to 8 for larger data.\n", " embeddings:\n", " embedding_dim: 256 # TODO: Increase to 512 for larger data.\n", " scale: True\n", " dropout: 0.2\n", " # typically ff_size = 4 x hidden_size\n", " hidden_size: 256 # TODO: Increase to 512 for larger data.\n", " ff_size: 1024 # TODO: Increase to 2048 for larger data.\n", " dropout: 0.3\n", "\"\"\".format(name=name, gdrive_path=os.environ[\"gdrive_path\"], source_language=source_language, target_language=target_language, model_temp_dir=model_temp_dir, last_checkpoint=last_checkpoint)\n", "with open(\"joeynmt/configs/transformer_{name}.yaml\".format(name=name),'w') as f:\n", " f.write(config)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "pIifxE3Qzuvs" }, "source": [ "# Train the Model\n", "\n", "This single line of joeynmt runs the training using the config we made above" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "6ZBPFwT94WpI", "outputId": "c74f6e9c-e050-489b-a89f-9d4f32ac7a28", "colab": { "base_uri": "https://localhost:8080/", "height": 1000 } }, "source": [ "# Train the model\n", "# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints! \n", "!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml" ], "execution_count": 9, "outputs": [ { "output_type": "stream", "text": [ "2020-04-12 09:02:47,813 Hello! 
This is Joey-NMT.\n", "2020-04-12 09:02:47.955382: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n", "2020-04-12 09:02:49,269 Total params: 12188160\n", "2020-04-12 09:02:49,271 Trainable parameters: ['decoder.layer_norm.bias', 'decoder.layer_norm.weight', 'decoder.layers.0.dec_layer_norm.bias', 'decoder.layers.0.dec_layer_norm.weight', 'decoder.layers.0.feed_forward.layer_norm.bias', 'decoder.layers.0.feed_forward.layer_norm.weight', 'decoder.layers.0.feed_forward.pwff_layer.0.bias', 'decoder.layers.0.feed_forward.pwff_layer.0.weight', 'decoder.layers.0.feed_forward.pwff_layer.3.bias', 'decoder.layers.0.feed_forward.pwff_layer.3.weight', 'decoder.layers.0.src_trg_att.k_layer.bias', 'decoder.layers.0.src_trg_att.k_layer.weight', 'decoder.layers.0.src_trg_att.output_layer.bias', 'decoder.layers.0.src_trg_att.output_layer.weight', 'decoder.layers.0.src_trg_att.q_layer.bias', 'decoder.layers.0.src_trg_att.q_layer.weight', 'decoder.layers.0.src_trg_att.v_layer.bias', 'decoder.layers.0.src_trg_att.v_layer.weight', 'decoder.layers.0.trg_trg_att.k_layer.bias', 'decoder.layers.0.trg_trg_att.k_layer.weight', 'decoder.layers.0.trg_trg_att.output_layer.bias', 'decoder.layers.0.trg_trg_att.output_layer.weight', 'decoder.layers.0.trg_trg_att.q_layer.bias', 'decoder.layers.0.trg_trg_att.q_layer.weight', 'decoder.layers.0.trg_trg_att.v_layer.bias', 'decoder.layers.0.trg_trg_att.v_layer.weight', 'decoder.layers.0.x_layer_norm.bias', 'decoder.layers.0.x_layer_norm.weight', 'decoder.layers.1.dec_layer_norm.bias', 'decoder.layers.1.dec_layer_norm.weight', 'decoder.layers.1.feed_forward.layer_norm.bias', 'decoder.layers.1.feed_forward.layer_norm.weight', 'decoder.layers.1.feed_forward.pwff_layer.0.bias', 'decoder.layers.1.feed_forward.pwff_layer.0.weight', 'decoder.layers.1.feed_forward.pwff_layer.3.bias', 'decoder.layers.1.feed_forward.pwff_layer.3.weight', 'decoder.layers.1.src_trg_att.k_layer.bias', 'decoder.layers.1.src_trg_att.k_layer.weight', 'decoder.layers.1.src_trg_att.output_layer.bias', 'decoder.layers.1.src_trg_att.output_layer.weight', 'decoder.layers.1.src_trg_att.q_layer.bias', 'decoder.layers.1.src_trg_att.q_layer.weight', 'decoder.layers.1.src_trg_att.v_layer.bias', 'decoder.layers.1.src_trg_att.v_layer.weight', 'decoder.layers.1.trg_trg_att.k_layer.bias', 'decoder.layers.1.trg_trg_att.k_layer.weight', 'decoder.layers.1.trg_trg_att.output_layer.bias', 'decoder.layers.1.trg_trg_att.output_layer.weight', 'decoder.layers.1.trg_trg_att.q_layer.bias', 'decoder.layers.1.trg_trg_att.q_layer.weight', 'decoder.layers.1.trg_trg_att.v_layer.bias', 'decoder.layers.1.trg_trg_att.v_layer.weight', 'decoder.layers.1.x_layer_norm.bias', 'decoder.layers.1.x_layer_norm.weight', 'decoder.layers.2.dec_layer_norm.bias', 'decoder.layers.2.dec_layer_norm.weight', 'decoder.layers.2.feed_forward.layer_norm.bias', 'decoder.layers.2.feed_forward.layer_norm.weight', 'decoder.layers.2.feed_forward.pwff_layer.0.bias', 'decoder.layers.2.feed_forward.pwff_layer.0.weight', 'decoder.layers.2.feed_forward.pwff_layer.3.bias', 'decoder.layers.2.feed_forward.pwff_layer.3.weight', 'decoder.layers.2.src_trg_att.k_layer.bias', 'decoder.layers.2.src_trg_att.k_layer.weight', 'decoder.layers.2.src_trg_att.output_layer.bias', 'decoder.layers.2.src_trg_att.output_layer.weight', 'decoder.layers.2.src_trg_att.q_layer.bias', 'decoder.layers.2.src_trg_att.q_layer.weight', 'decoder.layers.2.src_trg_att.v_layer.bias', 'decoder.layers.2.src_trg_att.v_layer.weight', 
'decoder.layers.2.trg_trg_att.k_layer.bias', 'decoder.layers.2.trg_trg_att.k_layer.weight', 'decoder.layers.2.trg_trg_att.output_layer.bias', 'decoder.layers.2.trg_trg_att.output_layer.weight', 'decoder.layers.2.trg_trg_att.q_layer.bias', 'decoder.layers.2.trg_trg_att.q_layer.weight', 'decoder.layers.2.trg_trg_att.v_layer.bias', 'decoder.layers.2.trg_trg_att.v_layer.weight', 'decoder.layers.2.x_layer_norm.bias', 'decoder.layers.2.x_layer_norm.weight', 'decoder.layers.3.dec_layer_norm.bias', 'decoder.layers.3.dec_layer_norm.weight', 'decoder.layers.3.feed_forward.layer_norm.bias', 'decoder.layers.3.feed_forward.layer_norm.weight', 'decoder.layers.3.feed_forward.pwff_layer.0.bias', 'decoder.layers.3.feed_forward.pwff_layer.0.weight', 'decoder.layers.3.feed_forward.pwff_layer.3.bias', 'decoder.layers.3.feed_forward.pwff_layer.3.weight', 'decoder.layers.3.src_trg_att.k_layer.bias', 'decoder.layers.3.src_trg_att.k_layer.weight', 'decoder.layers.3.src_trg_att.output_layer.bias', 'decoder.layers.3.src_trg_att.output_layer.weight', 'decoder.layers.3.src_trg_att.q_layer.bias', 'decoder.layers.3.src_trg_att.q_layer.weight', 'decoder.layers.3.src_trg_att.v_layer.bias', 'decoder.layers.3.src_trg_att.v_layer.weight', 'decoder.layers.3.trg_trg_att.k_layer.bias', 'decoder.layers.3.trg_trg_att.k_layer.weight', 'decoder.layers.3.trg_trg_att.output_layer.bias', 'decoder.layers.3.trg_trg_att.output_layer.weight', 'decoder.layers.3.trg_trg_att.q_layer.bias', 'decoder.layers.3.trg_trg_att.q_layer.weight', 'decoder.layers.3.trg_trg_att.v_layer.bias', 'decoder.layers.3.trg_trg_att.v_layer.weight', 'decoder.layers.3.x_layer_norm.bias', 'decoder.layers.3.x_layer_norm.weight', 'decoder.layers.4.dec_layer_norm.bias', 'decoder.layers.4.dec_layer_norm.weight', 'decoder.layers.4.feed_forward.layer_norm.bias', 'decoder.layers.4.feed_forward.layer_norm.weight', 'decoder.layers.4.feed_forward.pwff_layer.0.bias', 'decoder.layers.4.feed_forward.pwff_layer.0.weight', 'decoder.layers.4.feed_forward.pwff_layer.3.bias', 'decoder.layers.4.feed_forward.pwff_layer.3.weight', 'decoder.layers.4.src_trg_att.k_layer.bias', 'decoder.layers.4.src_trg_att.k_layer.weight', 'decoder.layers.4.src_trg_att.output_layer.bias', 'decoder.layers.4.src_trg_att.output_layer.weight', 'decoder.layers.4.src_trg_att.q_layer.bias', 'decoder.layers.4.src_trg_att.q_layer.weight', 'decoder.layers.4.src_trg_att.v_layer.bias', 'decoder.layers.4.src_trg_att.v_layer.weight', 'decoder.layers.4.trg_trg_att.k_layer.bias', 'decoder.layers.4.trg_trg_att.k_layer.weight', 'decoder.layers.4.trg_trg_att.output_layer.bias', 'decoder.layers.4.trg_trg_att.output_layer.weight', 'decoder.layers.4.trg_trg_att.q_layer.bias', 'decoder.layers.4.trg_trg_att.q_layer.weight', 'decoder.layers.4.trg_trg_att.v_layer.bias', 'decoder.layers.4.trg_trg_att.v_layer.weight', 'decoder.layers.4.x_layer_norm.bias', 'decoder.layers.4.x_layer_norm.weight', 'decoder.layers.5.dec_layer_norm.bias', 'decoder.layers.5.dec_layer_norm.weight', 'decoder.layers.5.feed_forward.layer_norm.bias', 'decoder.layers.5.feed_forward.layer_norm.weight', 'decoder.layers.5.feed_forward.pwff_layer.0.bias', 'decoder.layers.5.feed_forward.pwff_layer.0.weight', 'decoder.layers.5.feed_forward.pwff_layer.3.bias', 'decoder.layers.5.feed_forward.pwff_layer.3.weight', 'decoder.layers.5.src_trg_att.k_layer.bias', 'decoder.layers.5.src_trg_att.k_layer.weight', 'decoder.layers.5.src_trg_att.output_layer.bias', 'decoder.layers.5.src_trg_att.output_layer.weight', 'decoder.layers.5.src_trg_att.q_layer.bias', 
'decoder.layers.5.src_trg_att.q_layer.weight', 'decoder.layers.5.src_trg_att.v_layer.bias', 'decoder.layers.5.src_trg_att.v_layer.weight', 'decoder.layers.5.trg_trg_att.k_layer.bias', 'decoder.layers.5.trg_trg_att.k_layer.weight', 'decoder.layers.5.trg_trg_att.output_layer.bias', 'decoder.layers.5.trg_trg_att.output_layer.weight', 'decoder.layers.5.trg_trg_att.q_layer.bias', 'decoder.layers.5.trg_trg_att.q_layer.weight', 'decoder.layers.5.trg_trg_att.v_layer.bias', 'decoder.layers.5.trg_trg_att.v_layer.weight', 'decoder.layers.5.x_layer_norm.bias', 'decoder.layers.5.x_layer_norm.weight', 'encoder.layer_norm.bias', 'encoder.layer_norm.weight', 'encoder.layers.0.feed_forward.layer_norm.bias', 'encoder.layers.0.feed_forward.layer_norm.weight', 'encoder.layers.0.feed_forward.pwff_layer.0.bias', 'encoder.layers.0.feed_forward.pwff_layer.0.weight', 'encoder.layers.0.feed_forward.pwff_layer.3.bias', 'encoder.layers.0.feed_forward.pwff_layer.3.weight', 'encoder.layers.0.layer_norm.bias', 'encoder.layers.0.layer_norm.weight', 'encoder.layers.0.src_src_att.k_layer.bias', 'encoder.layers.0.src_src_att.k_layer.weight', 'encoder.layers.0.src_src_att.output_layer.bias', 'encoder.layers.0.src_src_att.output_layer.weight', 'encoder.layers.0.src_src_att.q_layer.bias', 'encoder.layers.0.src_src_att.q_layer.weight', 'encoder.layers.0.src_src_att.v_layer.bias', 'encoder.layers.0.src_src_att.v_layer.weight', 'encoder.layers.1.feed_forward.layer_norm.bias', 'encoder.layers.1.feed_forward.layer_norm.weight', 'encoder.layers.1.feed_forward.pwff_layer.0.bias', 'encoder.layers.1.feed_forward.pwff_layer.0.weight', 'encoder.layers.1.feed_forward.pwff_layer.3.bias', 'encoder.layers.1.feed_forward.pwff_layer.3.weight', 'encoder.layers.1.layer_norm.bias', 'encoder.layers.1.layer_norm.weight', 'encoder.layers.1.src_src_att.k_layer.bias', 'encoder.layers.1.src_src_att.k_layer.weight', 'encoder.layers.1.src_src_att.output_layer.bias', 'encoder.layers.1.src_src_att.output_layer.weight', 'encoder.layers.1.src_src_att.q_layer.bias', 'encoder.layers.1.src_src_att.q_layer.weight', 'encoder.layers.1.src_src_att.v_layer.bias', 'encoder.layers.1.src_src_att.v_layer.weight', 'encoder.layers.2.feed_forward.layer_norm.bias', 'encoder.layers.2.feed_forward.layer_norm.weight', 'encoder.layers.2.feed_forward.pwff_layer.0.bias', 'encoder.layers.2.feed_forward.pwff_layer.0.weight', 'encoder.layers.2.feed_forward.pwff_layer.3.bias', 'encoder.layers.2.feed_forward.pwff_layer.3.weight', 'encoder.layers.2.layer_norm.bias', 'encoder.layers.2.layer_norm.weight', 'encoder.layers.2.src_src_att.k_layer.bias', 'encoder.layers.2.src_src_att.k_layer.weight', 'encoder.layers.2.src_src_att.output_layer.bias', 'encoder.layers.2.src_src_att.output_layer.weight', 'encoder.layers.2.src_src_att.q_layer.bias', 'encoder.layers.2.src_src_att.q_layer.weight', 'encoder.layers.2.src_src_att.v_layer.bias', 'encoder.layers.2.src_src_att.v_layer.weight', 'encoder.layers.3.feed_forward.layer_norm.bias', 'encoder.layers.3.feed_forward.layer_norm.weight', 'encoder.layers.3.feed_forward.pwff_layer.0.bias', 'encoder.layers.3.feed_forward.pwff_layer.0.weight', 'encoder.layers.3.feed_forward.pwff_layer.3.bias', 'encoder.layers.3.feed_forward.pwff_layer.3.weight', 'encoder.layers.3.layer_norm.bias', 'encoder.layers.3.layer_norm.weight', 'encoder.layers.3.src_src_att.k_layer.bias', 'encoder.layers.3.src_src_att.k_layer.weight', 'encoder.layers.3.src_src_att.output_layer.bias', 'encoder.layers.3.src_src_att.output_layer.weight', 'encoder.layers.3.src_src_att.q_layer.bias', 
'encoder.layers.3.src_src_att.q_layer.weight', 'encoder.layers.3.src_src_att.v_layer.bias', 'encoder.layers.3.src_src_att.v_layer.weight', 'encoder.layers.4.feed_forward.layer_norm.bias', 'encoder.layers.4.feed_forward.layer_norm.weight', 'encoder.layers.4.feed_forward.pwff_layer.0.bias', 'encoder.layers.4.feed_forward.pwff_layer.0.weight', 'encoder.layers.4.feed_forward.pwff_layer.3.bias', 'encoder.layers.4.feed_forward.pwff_layer.3.weight', 'encoder.layers.4.layer_norm.bias', 'encoder.layers.4.layer_norm.weight', 'encoder.layers.4.src_src_att.k_layer.bias', 'encoder.layers.4.src_src_att.k_layer.weight', 'encoder.layers.4.src_src_att.output_layer.bias', 'encoder.layers.4.src_src_att.output_layer.weight', 'encoder.layers.4.src_src_att.q_layer.bias', 'encoder.layers.4.src_src_att.q_layer.weight', 'encoder.layers.4.src_src_att.v_layer.bias', 'encoder.layers.4.src_src_att.v_layer.weight', 'encoder.layers.5.feed_forward.layer_norm.bias', 'encoder.layers.5.feed_forward.layer_norm.weight', 'encoder.layers.5.feed_forward.pwff_layer.0.bias', 'encoder.layers.5.feed_forward.pwff_layer.0.weight', 'encoder.layers.5.feed_forward.pwff_layer.3.bias', 'encoder.layers.5.feed_forward.pwff_layer.3.weight', 'encoder.layers.5.layer_norm.bias', 'encoder.layers.5.layer_norm.weight', 'encoder.layers.5.src_src_att.k_layer.bias', 'encoder.layers.5.src_src_att.k_layer.weight', 'encoder.layers.5.src_src_att.output_layer.bias', 'encoder.layers.5.src_src_att.output_layer.weight', 'encoder.layers.5.src_src_att.q_layer.bias', 'encoder.layers.5.src_src_att.q_layer.weight', 'encoder.layers.5.src_src_att.v_layer.bias', 'encoder.layers.5.src_src_att.v_layer.weight', 'src_embed.lut.weight']\n", "2020-04-12 09:03:04,600 Loading model from /content/drive/My Drive/masakhane/yo-en-baseline/models/yoen_transformer/best.ckpt\n", "2020-04-12 09:03:08,860 cfg.name : yoen_transformer\n", "2020-04-12 09:03:08,861 cfg.data.src : yo\n", "2020-04-12 09:03:08,861 cfg.data.trg : en\n", "2020-04-12 09:03:08,861 cfg.data.train : /content/drive/My Drive/masakhane/yo-en-baseline/train.bpe\n", "2020-04-12 09:03:08,861 cfg.data.dev : /content/drive/My Drive/masakhane/yo-en-baseline/dev.bpe\n", "2020-04-12 09:03:08,861 cfg.data.test : /content/drive/My Drive/masakhane/yo-en-baseline/test.bpe\n", "2020-04-12 09:03:08,861 cfg.data.level : bpe\n", "2020-04-12 09:03:08,862 cfg.data.lowercase : False\n", "2020-04-12 09:03:08,862 cfg.data.max_sent_length : 100\n", "2020-04-12 09:03:08,862 cfg.data.src_vocab : /content/drive/My Drive/masakhane/yo-en-baseline/vocab.txt\n", "2020-04-12 09:03:08,862 cfg.data.trg_vocab : /content/drive/My Drive/masakhane/yo-en-baseline/vocab.txt\n", "2020-04-12 09:03:08,862 cfg.testing.beam_size : 5\n", "2020-04-12 09:03:08,862 cfg.testing.alpha : 1.0\n", "2020-04-12 09:03:08,862 cfg.training.load_model : /content/drive/My Drive/masakhane/yo-en-baseline/models/yoen_transformer/best.ckpt\n", "2020-04-12 09:03:08,863 cfg.training.random_seed : 42\n", "2020-04-12 09:03:08,863 cfg.training.optimizer : adam\n", "2020-04-12 09:03:08,863 cfg.training.normalization : tokens\n", "2020-04-12 09:03:08,863 cfg.training.adam_betas : [0.9, 0.999]\n", "2020-04-12 09:03:08,863 cfg.training.scheduling : plateau\n", "2020-04-12 09:03:08,863 cfg.training.patience : 5\n", "2020-04-12 09:03:08,863 cfg.training.learning_rate_factor : 0.5\n", "2020-04-12 09:03:08,864 cfg.training.learning_rate_warmup : 1000\n", "2020-04-12 09:03:08,864 cfg.training.decrease_factor : 0.7\n", "2020-04-12 09:03:08,864 cfg.training.loss : crossentropy\n", "2020-04-12 
09:03:08,864 cfg.training.learning_rate : 0.0003\n", "2020-04-12 09:03:08,864 cfg.training.learning_rate_min : 1e-08\n", "2020-04-12 09:03:08,864 cfg.training.weight_decay : 0.0\n", "2020-04-12 09:03:08,864 cfg.training.label_smoothing : 0.1\n", "2020-04-12 09:03:08,865 cfg.training.batch_size : 4096\n", "2020-04-12 09:03:08,865 cfg.training.batch_type : token\n", "2020-04-12 09:03:08,865 cfg.training.eval_batch_size : 3600\n", "2020-04-12 09:03:08,865 cfg.training.eval_batch_type : token\n", "2020-04-12 09:03:08,865 cfg.training.batch_multiplier : 1\n", "2020-04-12 09:03:08,865 cfg.training.early_stopping_metric : ppl\n", "2020-04-12 09:03:08,865 cfg.training.epochs : 3\n", "2020-04-12 09:03:08,865 cfg.training.validation_freq : 1000\n", "2020-04-12 09:03:08,866 cfg.training.logging_freq : 100\n", "2020-04-12 09:03:08,866 cfg.training.eval_metric : bleu\n", "2020-04-12 09:03:08,866 cfg.training.model_dir : /content/drive/My Drive/masakhane/model-temp\n", "2020-04-12 09:03:08,866 cfg.training.overwrite : True\n", "2020-04-12 09:03:08,866 cfg.training.shuffle : True\n", "2020-04-12 09:03:08,866 cfg.training.use_cuda : True\n", "2020-04-12 09:03:08,866 cfg.training.max_output_length : 100\n", "2020-04-12 09:03:08,867 cfg.training.print_valid_sents : [0, 1, 2, 3]\n", "2020-04-12 09:03:08,867 cfg.training.keep_last_ckpts : 3\n", "2020-04-12 09:03:08,867 cfg.model.initializer : xavier\n", "2020-04-12 09:03:08,867 cfg.model.bias_initializer : zeros\n", "2020-04-12 09:03:08,867 cfg.model.init_gain : 1.0\n", "2020-04-12 09:03:08,867 cfg.model.embed_initializer : xavier\n", "2020-04-12 09:03:08,867 cfg.model.embed_init_gain : 1.0\n", "2020-04-12 09:03:08,867 cfg.model.tied_embeddings : True\n", "2020-04-12 09:03:08,868 cfg.model.tied_softmax : True\n", "2020-04-12 09:03:08,868 cfg.model.encoder.type : transformer\n", "2020-04-12 09:03:08,868 cfg.model.encoder.num_layers : 6\n", "2020-04-12 09:03:08,868 cfg.model.encoder.num_heads : 4\n", "2020-04-12 09:03:08,868 cfg.model.encoder.embeddings.embedding_dim : 256\n", "2020-04-12 09:03:08,868 cfg.model.encoder.embeddings.scale : True\n", "2020-04-12 09:03:08,868 cfg.model.encoder.embeddings.dropout : 0.2\n", "2020-04-12 09:03:08,869 cfg.model.encoder.hidden_size : 256\n", "2020-04-12 09:03:08,869 cfg.model.encoder.ff_size : 1024\n", "2020-04-12 09:03:08,869 cfg.model.encoder.dropout : 0.3\n", "2020-04-12 09:03:08,869 cfg.model.decoder.type : transformer\n", "2020-04-12 09:03:08,869 cfg.model.decoder.num_layers : 6\n", "2020-04-12 09:03:08,869 cfg.model.decoder.num_heads : 4\n", "2020-04-12 09:03:08,869 cfg.model.decoder.embeddings.embedding_dim : 256\n", "2020-04-12 09:03:08,869 cfg.model.decoder.embeddings.scale : True\n", "2020-04-12 09:03:08,870 cfg.model.decoder.embeddings.dropout : 0.2\n", "2020-04-12 09:03:08,870 cfg.model.decoder.hidden_size : 256\n", "2020-04-12 09:03:08,870 cfg.model.decoder.ff_size : 1024\n", "2020-04-12 09:03:08,870 cfg.model.decoder.dropout : 0.3\n", "2020-04-12 09:03:08,870 Data set sizes: \n", "\ttrain 417478,\n", "\tvalid 1000,\n", "\ttest 2662\n", "2020-04-12 09:03:08,870 First training example:\n", "\t[SRC] Òun ló ń jẹ́ káwọn èèyàn dá wa mọ̀ pé ọmọlẹ́yìn Jésù ni wá àti pé à ń fara wé Jèhófà , Ọlọ́run ìfẹ́ .\n", "\t[TRG] It identi@@ fi@@ es us as Jesus ’ followers and as im@@ it@@ at@@ ors of Jehovah , the S@@ ource of love .\n", "2020-04-12 09:03:08,871 First 10 words (src): (0) (1) (2) (3) (4) , (5) . (6) the (7) a (8) tó (9) to\n", "2020-04-12 09:03:08,871 First 10 words (trg): (0) (1) (2) (3) (4) , (5) . 
(6) the (7) a (8) tó (9) to\n", "2020-04-12 09:03:08,871 Number of Src words (types): 4406\n", "2020-04-12 09:03:08,871 Number of Trg words (types): 4406\n", "2020-04-12 09:03:08,871 Model(\n", "\tencoder=TransformerEncoder(num_layers=6, num_heads=4),\n", "\tdecoder=TransformerDecoder(num_layers=6, num_heads=4),\n", "\tsrc_embed=Embeddings(embedding_dim=256, vocab_size=4406),\n", "\ttrg_embed=Embeddings(embedding_dim=256, vocab_size=4406))\n", "2020-04-12 09:03:08,880 EPOCH 1\n", "2020-04-12 09:03:20,744 Epoch 1 Step: 277100 Batch Loss: 1.684694 Tokens per Sec: 18278, Lr: 0.000035\n", "2020-04-12 09:03:31,672 Epoch 1 Step: 277200 Batch Loss: 1.575451 Tokens per Sec: 19201, Lr: 0.000035\n", "2020-04-12 09:03:42,643 Epoch 1 Step: 277300 Batch Loss: 1.831752 Tokens per Sec: 19922, Lr: 0.000035\n", "2020-04-12 09:03:53,652 Epoch 1 Step: 277400 Batch Loss: 1.551102 Tokens per Sec: 19563, Lr: 0.000035\n", "2020-04-12 09:04:04,662 Epoch 1 Step: 277500 Batch Loss: 1.590573 Tokens per Sec: 19878, Lr: 0.000035\n", "2020-04-12 09:04:15,548 Epoch 1 Step: 277600 Batch Loss: 1.697411 Tokens per Sec: 19356, Lr: 0.000035\n", "2020-04-12 09:04:26,559 Epoch 1 Step: 277700 Batch Loss: 1.532215 Tokens per Sec: 19551, Lr: 0.000035\n", "2020-04-12 09:04:37,697 Epoch 1 Step: 277800 Batch Loss: 1.569958 Tokens per Sec: 19659, Lr: 0.000035\n", "2020-04-12 09:04:48,744 Epoch 1 Step: 277900 Batch Loss: 1.808574 Tokens per Sec: 19549, Lr: 0.000035\n", "2020-04-12 09:04:59,585 Epoch 1 Step: 278000 Batch Loss: 1.552910 Tokens per Sec: 18920, Lr: 0.000035\n", "2020-04-12 09:05:13,761 Example #0\n", "2020-04-12 09:05:13,762 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:05:13,762 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:05:13,762 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:05:13,762 Example #1\n", "2020-04-12 09:05:13,763 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:05:13,763 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:05:13,763 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:05:13,763 Example #2\n", "2020-04-12 09:05:13,763 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:05:13,763 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:05:13,764 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:05:13,764 Example #3\n", "2020-04-12 09:05:13,764 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:05:13,764 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:05:13,764 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:05:13,764 Validation result (greedy) at epoch 1, step 278000: bleu: 30.10, loss: 40294.0938, ppl: 4.6314, duration: 14.1785s\n", "2020-04-12 09:05:24,789 Epoch 1 Step: 278100 Batch Loss: 1.591986 Tokens per Sec: 19606, Lr: 0.000035\n", "2020-04-12 
09:05:35,817 Epoch 1 Step: 278200 Batch Loss: 1.616184 Tokens per Sec: 19879, Lr: 0.000035\n", "2020-04-12 09:05:46,721 Epoch 1 Step: 278300 Batch Loss: 1.792909 Tokens per Sec: 19426, Lr: 0.000035\n", "2020-04-12 09:05:57,620 Epoch 1 Step: 278400 Batch Loss: 1.666043 Tokens per Sec: 19646, Lr: 0.000035\n", "2020-04-12 09:06:08,522 Epoch 1 Step: 278500 Batch Loss: 1.544160 Tokens per Sec: 19366, Lr: 0.000035\n", "2020-04-12 09:06:19,550 Epoch 1 Step: 278600 Batch Loss: 1.615016 Tokens per Sec: 19944, Lr: 0.000035\n", "2020-04-12 09:06:30,479 Epoch 1 Step: 278700 Batch Loss: 1.536902 Tokens per Sec: 19445, Lr: 0.000035\n", "2020-04-12 09:06:41,434 Epoch 1 Step: 278800 Batch Loss: 1.844232 Tokens per Sec: 19658, Lr: 0.000035\n", "2020-04-12 09:06:52,339 Epoch 1 Step: 278900 Batch Loss: 1.603102 Tokens per Sec: 19453, Lr: 0.000035\n", "2020-04-12 09:07:03,150 Epoch 1 Step: 279000 Batch Loss: 1.532111 Tokens per Sec: 19580, Lr: 0.000035\n", "2020-04-12 09:07:16,849 Example #0\n", "2020-04-12 09:07:16,850 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:07:16,850 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:07:16,850 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:07:16,851 Example #1\n", "2020-04-12 09:07:16,851 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:07:16,851 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:07:16,851 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:07:16,851 Example #2\n", "2020-04-12 09:07:16,852 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:07:16,852 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:07:16,852 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:07:16,852 Example #3\n", "2020-04-12 09:07:16,852 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:07:16,852 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:07:16,853 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:07:16,853 Validation result (greedy) at epoch 1, step 279000: bleu: 30.26, loss: 40277.5742, ppl: 4.6285, duration: 13.7022s\n", "2020-04-12 09:07:27,813 Epoch 1 Step: 279100 Batch Loss: 1.613816 Tokens per Sec: 19430, Lr: 0.000035\n", "2020-04-12 09:07:38,840 Epoch 1 Step: 279200 Batch Loss: 1.685256 Tokens per Sec: 19967, Lr: 0.000035\n", "2020-04-12 09:07:49,711 Epoch 1 Step: 279300 Batch Loss: 1.668454 Tokens per Sec: 19161, Lr: 0.000035\n", "2020-04-12 09:08:00,622 Epoch 1 Step: 279400 Batch Loss: 1.588058 Tokens per Sec: 19545, Lr: 0.000035\n", "2020-04-12 09:08:11,499 Epoch 1 Step: 279500 Batch Loss: 1.709849 Tokens per Sec: 19327, Lr: 0.000035\n", "2020-04-12 09:08:22,413 Epoch 1 Step: 279600 Batch Loss: 1.706573 Tokens per Sec: 19491, Lr: 0.000035\n", "2020-04-12 09:08:33,276 Epoch 1 Step: 279700 Batch Loss: 1.629491 Tokens per Sec: 
19478, Lr: 0.000035\n", "2020-04-12 09:08:44,202 Epoch 1 Step: 279800 Batch Loss: 1.602718 Tokens per Sec: 19679, Lr: 0.000035\n", "2020-04-12 09:08:55,153 Epoch 1 Step: 279900 Batch Loss: 1.732164 Tokens per Sec: 19128, Lr: 0.000035\n", "2020-04-12 09:09:06,108 Epoch 1 Step: 280000 Batch Loss: 1.835576 Tokens per Sec: 19435, Lr: 0.000035\n", "2020-04-12 09:09:19,524 Example #0\n", "2020-04-12 09:09:19,524 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:09:19,524 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:09:19,525 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:09:19,525 Example #1\n", "2020-04-12 09:09:19,525 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:09:19,525 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:09:19,525 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:09:19,526 Example #2\n", "2020-04-12 09:09:19,526 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:09:19,526 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:09:19,526 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:09:19,526 Example #3\n", "2020-04-12 09:09:19,527 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:09:19,527 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:09:19,527 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:09:19,527 Validation result (greedy) at epoch 1, step 280000: bleu: 30.28, loss: 40215.3828, ppl: 4.6175, duration: 13.4182s\n", "2020-04-12 09:09:30,263 Epoch 1 Step: 280100 Batch Loss: 1.673205 Tokens per Sec: 19531, Lr: 0.000035\n", "2020-04-12 09:09:41,041 Epoch 1 Step: 280200 Batch Loss: 1.556396 Tokens per Sec: 19064, Lr: 0.000035\n", "2020-04-12 09:09:51,938 Epoch 1 Step: 280300 Batch Loss: 1.802814 Tokens per Sec: 19651, Lr: 0.000035\n", "2020-04-12 09:10:02,793 Epoch 1 Step: 280400 Batch Loss: 1.666098 Tokens per Sec: 19833, Lr: 0.000035\n", "2020-04-12 09:10:13,856 Epoch 1 Step: 280500 Batch Loss: 1.619582 Tokens per Sec: 20011, Lr: 0.000035\n", "2020-04-12 09:10:24,745 Epoch 1 Step: 280600 Batch Loss: 1.563052 Tokens per Sec: 19604, Lr: 0.000035\n", "2020-04-12 09:10:35,732 Epoch 1 Step: 280700 Batch Loss: 1.333327 Tokens per Sec: 19840, Lr: 0.000035\n", "2020-04-12 09:10:46,743 Epoch 1 Step: 280800 Batch Loss: 1.567659 Tokens per Sec: 19942, Lr: 0.000035\n", "2020-04-12 09:10:57,748 Epoch 1 Step: 280900 Batch Loss: 1.563722 Tokens per Sec: 20012, Lr: 0.000035\n", "2020-04-12 09:11:08,622 Epoch 1 Step: 281000 Batch Loss: 1.655927 Tokens per Sec: 19547, Lr: 0.000035\n", "2020-04-12 09:11:22,275 Example #0\n", "2020-04-12 09:11:22,275 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:11:22,275 \tReference: On his first missionary 
tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:11:22,276 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:11:22,276 Example #1\n", "2020-04-12 09:11:22,276 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:11:22,276 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:11:22,276 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:11:22,276 Example #2\n", "2020-04-12 09:11:22,277 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:11:22,277 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:11:22,277 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:11:22,277 Example #3\n", "2020-04-12 09:11:22,277 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:11:22,278 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:11:22,278 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:11:22,278 Validation result (greedy) at epoch 1, step 281000: bleu: 30.13, loss: 40243.1094, ppl: 4.6224, duration: 13.6552s\n", "2020-04-12 09:11:33,282 Epoch 1 Step: 281100 Batch Loss: 1.574655 Tokens per Sec: 19801, Lr: 0.000025\n", "2020-04-12 09:11:44,165 Epoch 1 Step: 281200 Batch Loss: 1.674156 Tokens per Sec: 19369, Lr: 0.000025\n", "2020-04-12 09:11:55,018 Epoch 1 Step: 281300 Batch Loss: 1.646524 Tokens per Sec: 19157, Lr: 0.000025\n", "2020-04-12 09:12:05,801 Epoch 1 Step: 281400 Batch Loss: 1.671875 Tokens per Sec: 19456, Lr: 0.000025\n", "2020-04-12 09:12:16,719 Epoch 1 Step: 281500 Batch Loss: 1.742325 Tokens per Sec: 19578, Lr: 0.000025\n", "2020-04-12 09:12:27,660 Epoch 1 Step: 281600 Batch Loss: 1.605760 Tokens per Sec: 19686, Lr: 0.000025\n", "2020-04-12 09:12:38,399 Epoch 1 Step: 281700 Batch Loss: 1.404157 Tokens per Sec: 19310, Lr: 0.000025\n", "2020-04-12 09:12:49,305 Epoch 1 Step: 281800 Batch Loss: 1.668523 Tokens per Sec: 19598, Lr: 0.000025\n", "2020-04-12 09:13:00,156 Epoch 1 Step: 281900 Batch Loss: 1.530828 Tokens per Sec: 19135, Lr: 0.000025\n", "2020-04-12 09:13:11,209 Epoch 1 Step: 282000 Batch Loss: 1.549292 Tokens per Sec: 19794, Lr: 0.000025\n", "2020-04-12 09:13:24,810 Example #0\n", "2020-04-12 09:13:24,810 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:13:24,811 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:13:24,811 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:13:24,811 Example #1\n", "2020-04-12 09:13:24,811 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:13:24,811 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:13:24,812 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:13:24,812 Example #2\n", 
"2020-04-12 09:13:24,812 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:13:24,812 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:13:24,812 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:13:24,812 Example #3\n", "2020-04-12 09:13:24,813 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:13:24,813 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:13:24,813 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:13:24,813 Validation result (greedy) at epoch 1, step 282000: bleu: 30.15, loss: 40242.6914, ppl: 4.6223, duration: 13.6041s\n", "2020-04-12 09:13:35,787 Epoch 1 Step: 282100 Batch Loss: 1.632802 Tokens per Sec: 19749, Lr: 0.000025\n", "2020-04-12 09:13:42,850 Epoch 1: total training loss 8480.69\n", "2020-04-12 09:13:42,851 EPOCH 2\n", "2020-04-12 09:13:47,053 Epoch 2 Step: 282200 Batch Loss: 1.680374 Tokens per Sec: 17105, Lr: 0.000025\n", "2020-04-12 09:13:57,929 Epoch 2 Step: 282300 Batch Loss: 1.717307 Tokens per Sec: 19583, Lr: 0.000025\n", "2020-04-12 09:14:08,868 Epoch 2 Step: 282400 Batch Loss: 1.482708 Tokens per Sec: 19422, Lr: 0.000025\n", "2020-04-12 09:14:19,787 Epoch 2 Step: 282500 Batch Loss: 1.501216 Tokens per Sec: 19183, Lr: 0.000025\n", "2020-04-12 09:14:30,601 Epoch 2 Step: 282600 Batch Loss: 1.689887 Tokens per Sec: 19183, Lr: 0.000025\n", "2020-04-12 09:14:41,487 Epoch 2 Step: 282700 Batch Loss: 1.666531 Tokens per Sec: 19740, Lr: 0.000025\n", "2020-04-12 09:14:52,293 Epoch 2 Step: 282800 Batch Loss: 1.701214 Tokens per Sec: 19344, Lr: 0.000025\n", "2020-04-12 09:15:03,188 Epoch 2 Step: 282900 Batch Loss: 1.624024 Tokens per Sec: 19640, Lr: 0.000025\n", "2020-04-12 09:15:14,133 Epoch 2 Step: 283000 Batch Loss: 1.670008 Tokens per Sec: 19591, Lr: 0.000025\n", "2020-04-12 09:15:27,787 Example #0\n", "2020-04-12 09:15:27,788 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:15:27,788 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:15:27,788 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:15:27,789 Example #1\n", "2020-04-12 09:15:27,789 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:15:27,789 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:15:27,789 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:15:27,789 Example #2\n", "2020-04-12 09:15:27,789 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:15:27,790 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:15:27,790 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:15:27,790 Example #3\n", "2020-04-12 09:15:27,790 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", 
"2020-04-12 09:15:27,790 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:15:27,791 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:15:27,791 Validation result (greedy) at epoch 2, step 283000: bleu: 30.17, loss: 40270.6172, ppl: 4.6272, duration: 13.6569s\n", "2020-04-12 09:15:38,725 Epoch 2 Step: 283100 Batch Loss: 1.736698 Tokens per Sec: 19535, Lr: 0.000025\n", "2020-04-12 09:15:49,650 Epoch 2 Step: 283200 Batch Loss: 1.704589 Tokens per Sec: 19814, Lr: 0.000025\n", "2020-04-12 09:16:00,572 Epoch 2 Step: 283300 Batch Loss: 1.743042 Tokens per Sec: 20011, Lr: 0.000025\n", "2020-04-12 09:16:11,389 Epoch 2 Step: 283400 Batch Loss: 1.630345 Tokens per Sec: 19511, Lr: 0.000025\n", "2020-04-12 09:16:22,211 Epoch 2 Step: 283500 Batch Loss: 1.553275 Tokens per Sec: 19765, Lr: 0.000025\n", "2020-04-12 09:16:33,054 Epoch 2 Step: 283600 Batch Loss: 1.695509 Tokens per Sec: 19650, Lr: 0.000025\n", "2020-04-12 09:16:43,815 Epoch 2 Step: 283700 Batch Loss: 1.752427 Tokens per Sec: 19444, Lr: 0.000025\n", "2020-04-12 09:16:54,706 Epoch 2 Step: 283800 Batch Loss: 1.698369 Tokens per Sec: 20017, Lr: 0.000025\n", "2020-04-12 09:17:05,548 Epoch 2 Step: 283900 Batch Loss: 1.618877 Tokens per Sec: 19559, Lr: 0.000025\n", "2020-04-12 09:17:16,422 Epoch 2 Step: 284000 Batch Loss: 1.539809 Tokens per Sec: 19684, Lr: 0.000025\n", "2020-04-12 09:17:29,900 Example #0\n", "2020-04-12 09:17:29,901 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:17:29,901 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:17:29,901 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:17:29,901 Example #1\n", "2020-04-12 09:17:29,902 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:17:29,902 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:17:29,902 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:17:29,902 Example #2\n", "2020-04-12 09:17:29,902 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:17:29,903 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:17:29,903 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:17:29,903 Example #3\n", "2020-04-12 09:17:29,903 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:17:29,903 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:17:29,903 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:17:29,904 Validation result (greedy) at epoch 2, step 284000: bleu: 30.15, loss: 40262.3477, ppl: 4.6258, duration: 13.4813s\n", "2020-04-12 09:17:40,847 Epoch 2 Step: 284100 Batch Loss: 1.476657 Tokens per Sec: 19692, Lr: 
0.000025\n", "2020-04-12 09:17:51,656 Epoch 2 Step: 284200 Batch Loss: 1.757385 Tokens per Sec: 19604, Lr: 0.000025\n", "2020-04-12 09:18:02,606 Epoch 2 Step: 284300 Batch Loss: 1.659088 Tokens per Sec: 19724, Lr: 0.000025\n", "2020-04-12 09:18:13,502 Epoch 2 Step: 284400 Batch Loss: 1.727490 Tokens per Sec: 19611, Lr: 0.000025\n", "2020-04-12 09:18:24,453 Epoch 2 Step: 284500 Batch Loss: 1.645569 Tokens per Sec: 19878, Lr: 0.000025\n", "2020-04-12 09:18:35,381 Epoch 2 Step: 284600 Batch Loss: 1.652842 Tokens per Sec: 20305, Lr: 0.000025\n", "2020-04-12 09:18:46,381 Epoch 2 Step: 284700 Batch Loss: 1.701704 Tokens per Sec: 20068, Lr: 0.000025\n", "2020-04-12 09:18:57,287 Epoch 2 Step: 284800 Batch Loss: 1.530702 Tokens per Sec: 20082, Lr: 0.000025\n", "2020-04-12 09:19:08,183 Epoch 2 Step: 284900 Batch Loss: 1.518074 Tokens per Sec: 19456, Lr: 0.000025\n", "2020-04-12 09:19:19,087 Epoch 2 Step: 285000 Batch Loss: 1.464507 Tokens per Sec: 19363, Lr: 0.000025\n", "2020-04-12 09:19:32,694 Example #0\n", "2020-04-12 09:19:32,695 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:19:32,695 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:19:32,695 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:19:32,696 Example #1\n", "2020-04-12 09:19:32,696 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:19:32,696 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:19:32,696 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:19:32,696 Example #2\n", "2020-04-12 09:19:32,697 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:19:32,697 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:19:32,697 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:19:32,697 Example #3\n", "2020-04-12 09:19:32,698 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:19:32,698 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:19:32,698 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:19:32,698 Validation result (greedy) at epoch 2, step 285000: bleu: 30.10, loss: 40268.4805, ppl: 4.6269, duration: 13.6108s\n", "2020-04-12 09:19:43,579 Epoch 2 Step: 285100 Batch Loss: 1.663775 Tokens per Sec: 19608, Lr: 0.000025\n", "2020-04-12 09:19:54,350 Epoch 2 Step: 285200 Batch Loss: 1.735461 Tokens per Sec: 19596, Lr: 0.000025\n", "2020-04-12 09:20:05,136 Epoch 2 Step: 285300 Batch Loss: 1.755312 Tokens per Sec: 19666, Lr: 0.000025\n", "2020-04-12 09:20:16,084 Epoch 2 Step: 285400 Batch Loss: 1.750107 Tokens per Sec: 19880, Lr: 0.000025\n", "2020-04-12 09:20:27,015 Epoch 2 Step: 285500 Batch Loss: 1.648682 Tokens per Sec: 19578, Lr: 0.000025\n", "2020-04-12 09:20:37,953 Epoch 2 Step: 285600 Batch Loss: 1.716950 Tokens per Sec: 19760, Lr: 0.000025\n", "2020-04-12 09:20:48,826 Epoch 2 Step: 285700 Batch Loss: 
1.743062 Tokens per Sec: 19417, Lr: 0.000025\n", "2020-04-12 09:20:59,696 Epoch 2 Step: 285800 Batch Loss: 1.821899 Tokens per Sec: 19527, Lr: 0.000025\n", "2020-04-12 09:21:10,534 Epoch 2 Step: 285900 Batch Loss: 1.628103 Tokens per Sec: 19814, Lr: 0.000025\n", "2020-04-12 09:21:21,345 Epoch 2 Step: 286000 Batch Loss: 1.576982 Tokens per Sec: 19278, Lr: 0.000025\n", "2020-04-12 09:21:35,477 Example #0\n", "2020-04-12 09:21:35,477 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:21:35,478 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:21:35,478 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:21:35,478 Example #1\n", "2020-04-12 09:21:35,478 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:21:35,479 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:21:35,479 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:21:35,479 Example #2\n", "2020-04-12 09:21:35,479 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:21:35,480 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:21:35,480 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:21:35,480 Example #3\n", "2020-04-12 09:21:35,480 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:21:35,481 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:21:35,481 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:21:35,481 Validation result (greedy) at epoch 2, step 286000: bleu: 30.22, loss: 40247.5000, ppl: 4.6232, duration: 14.1354s\n", "2020-04-12 09:21:46,299 Epoch 2 Step: 286100 Batch Loss: 1.692235 Tokens per Sec: 19513, Lr: 0.000025\n", "2020-04-12 09:21:57,126 Epoch 2 Step: 286200 Batch Loss: 1.440811 Tokens per Sec: 19416, Lr: 0.000025\n", "2020-04-12 09:22:07,917 Epoch 2 Step: 286300 Batch Loss: 1.697176 Tokens per Sec: 19637, Lr: 0.000025\n", "2020-04-12 09:22:18,889 Epoch 2 Step: 286400 Batch Loss: 1.559660 Tokens per Sec: 20022, Lr: 0.000025\n", "2020-04-12 09:22:29,741 Epoch 2 Step: 286500 Batch Loss: 1.683756 Tokens per Sec: 19759, Lr: 0.000025\n", "2020-04-12 09:22:40,565 Epoch 2 Step: 286600 Batch Loss: 1.631120 Tokens per Sec: 19467, Lr: 0.000025\n", "2020-04-12 09:22:51,386 Epoch 2 Step: 286700 Batch Loss: 1.453831 Tokens per Sec: 19715, Lr: 0.000025\n", "2020-04-12 09:23:02,124 Epoch 2 Step: 286800 Batch Loss: 1.732887 Tokens per Sec: 19613, Lr: 0.000025\n", "2020-04-12 09:23:13,067 Epoch 2 Step: 286900 Batch Loss: 1.627965 Tokens per Sec: 20121, Lr: 0.000025\n", "2020-04-12 09:23:23,900 Epoch 2 Step: 287000 Batch Loss: 1.877879 Tokens per Sec: 19716, Lr: 0.000025\n", "2020-04-12 09:23:37,577 Example #0\n", "2020-04-12 09:23:37,578 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:23:37,578 \tReference: 
On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:23:37,578 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:23:37,578 Example #1\n", "2020-04-12 09:23:37,578 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:23:37,579 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:23:37,579 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:23:37,579 Example #2\n", "2020-04-12 09:23:37,579 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:23:37,579 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:23:37,579 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:23:37,579 Example #3\n", "2020-04-12 09:23:37,580 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:23:37,580 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:23:37,580 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:23:37,580 Validation result (greedy) at epoch 2, step 287000: bleu: 30.37, loss: 40190.9922, ppl: 4.6132, duration: 13.6796s\n", "2020-04-12 09:23:48,476 Epoch 2 Step: 287100 Batch Loss: 1.626933 Tokens per Sec: 19791, Lr: 0.000017\n", "2020-04-12 09:23:59,219 Epoch 2 Step: 287200 Batch Loss: 1.665654 Tokens per Sec: 19360, Lr: 0.000017\n", "2020-04-12 09:24:10,177 Epoch 2 Step: 287300 Batch Loss: 1.890874 Tokens per Sec: 20315, Lr: 0.000017\n", "2020-04-12 09:24:13,383 Epoch 2: total training loss 8488.34\n", "2020-04-12 09:24:13,384 EPOCH 3\n", "2020-04-12 09:24:21,465 Epoch 3 Step: 287400 Batch Loss: 1.696484 Tokens per Sec: 18484, Lr: 0.000017\n", "2020-04-12 09:24:32,393 Epoch 3 Step: 287500 Batch Loss: 1.722618 Tokens per Sec: 18876, Lr: 0.000017\n", "2020-04-12 09:24:43,219 Epoch 3 Step: 287600 Batch Loss: 1.639851 Tokens per Sec: 19766, Lr: 0.000017\n", "2020-04-12 09:24:54,082 Epoch 3 Step: 287700 Batch Loss: 1.763196 Tokens per Sec: 19598, Lr: 0.000017\n", "2020-04-12 09:25:04,855 Epoch 3 Step: 287800 Batch Loss: 1.572142 Tokens per Sec: 19514, Lr: 0.000017\n", "2020-04-12 09:25:15,647 Epoch 3 Step: 287900 Batch Loss: 1.598227 Tokens per Sec: 19677, Lr: 0.000017\n", "2020-04-12 09:25:26,457 Epoch 3 Step: 288000 Batch Loss: 1.744567 Tokens per Sec: 19623, Lr: 0.000017\n", "2020-04-12 09:25:40,034 Example #0\n", "2020-04-12 09:25:40,034 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:25:40,034 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:25:40,034 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:25:40,035 Example #1\n", "2020-04-12 09:25:40,035 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:25:40,035 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 
09:25:40,035 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:25:40,035 Example #2\n", "2020-04-12 09:25:40,036 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:25:40,036 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:25:40,036 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:25:40,036 Example #3\n", "2020-04-12 09:25:40,036 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:25:40,036 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:25:40,037 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:25:40,037 Validation result (greedy) at epoch 3, step 288000: bleu: 30.30, loss: 40156.4688, ppl: 4.6072, duration: 13.5791s\n", "2020-04-12 09:25:51,099 Epoch 3 Step: 288100 Batch Loss: 1.660825 Tokens per Sec: 20039, Lr: 0.000017\n", "2020-04-12 09:26:01,954 Epoch 3 Step: 288200 Batch Loss: 1.747323 Tokens per Sec: 19528, Lr: 0.000017\n", "2020-04-12 09:26:12,946 Epoch 3 Step: 288300 Batch Loss: 1.645363 Tokens per Sec: 19591, Lr: 0.000017\n", "2020-04-12 09:26:23,918 Epoch 3 Step: 288400 Batch Loss: 1.543887 Tokens per Sec: 19724, Lr: 0.000017\n", "2020-04-12 09:26:34,754 Epoch 3 Step: 288500 Batch Loss: 1.681951 Tokens per Sec: 19397, Lr: 0.000017\n", "2020-04-12 09:26:45,670 Epoch 3 Step: 288600 Batch Loss: 1.673341 Tokens per Sec: 20186, Lr: 0.000017\n", "2020-04-12 09:26:56,531 Epoch 3 Step: 288700 Batch Loss: 1.652655 Tokens per Sec: 19406, Lr: 0.000017\n", "2020-04-12 09:27:07,506 Epoch 3 Step: 288800 Batch Loss: 1.549620 Tokens per Sec: 19631, Lr: 0.000017\n", "2020-04-12 09:27:18,267 Epoch 3 Step: 288900 Batch Loss: 1.795976 Tokens per Sec: 18973, Lr: 0.000017\n", "2020-04-12 09:27:29,206 Epoch 3 Step: 289000 Batch Loss: 1.570788 Tokens per Sec: 19528, Lr: 0.000017\n", "2020-04-12 09:27:43,096 Example #0\n", "2020-04-12 09:27:43,096 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:27:43,096 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:27:43,097 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:27:43,097 Example #1\n", "2020-04-12 09:27:43,097 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:27:43,097 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:27:43,097 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:27:43,097 Example #2\n", "2020-04-12 09:27:43,098 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:27:43,098 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:27:43,098 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:27:43,098 Example #3\n", "2020-04-12 09:27:43,098 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá 
bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:27:43,099 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:27:43,099 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:27:43,099 Validation result (greedy) at epoch 3, step 289000: bleu: 30.37, loss: 40172.6211, ppl: 4.6100, duration: 13.8925s\n", "2020-04-12 09:27:53,971 Epoch 3 Step: 289100 Batch Loss: 1.794677 Tokens per Sec: 19751, Lr: 0.000017\n", "2020-04-12 09:28:04,723 Epoch 3 Step: 289200 Batch Loss: 1.652584 Tokens per Sec: 19732, Lr: 0.000017\n", "2020-04-12 09:28:15,595 Epoch 3 Step: 289300 Batch Loss: 1.638702 Tokens per Sec: 19468, Lr: 0.000017\n", "2020-04-12 09:28:26,406 Epoch 3 Step: 289400 Batch Loss: 1.933476 Tokens per Sec: 19228, Lr: 0.000017\n", "2020-04-12 09:28:37,304 Epoch 3 Step: 289500 Batch Loss: 1.590607 Tokens per Sec: 19636, Lr: 0.000017\n", "2020-04-12 09:28:48,013 Epoch 3 Step: 289600 Batch Loss: 1.507041 Tokens per Sec: 19263, Lr: 0.000017\n", "2020-04-12 09:28:58,911 Epoch 3 Step: 289700 Batch Loss: 1.562050 Tokens per Sec: 19700, Lr: 0.000017\n", "2020-04-12 09:29:09,844 Epoch 3 Step: 289800 Batch Loss: 1.598395 Tokens per Sec: 19661, Lr: 0.000017\n", "2020-04-12 09:29:20,595 Epoch 3 Step: 289900 Batch Loss: 1.668182 Tokens per Sec: 19187, Lr: 0.000017\n", "2020-04-12 09:29:31,411 Epoch 3 Step: 290000 Batch Loss: 1.698798 Tokens per Sec: 19508, Lr: 0.000017\n", "2020-04-12 09:29:45,093 Example #0\n", "2020-04-12 09:29:45,094 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:29:45,094 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:29:45,094 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:29:45,094 Example #1\n", "2020-04-12 09:29:45,095 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:29:45,095 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:29:45,095 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:29:45,095 Example #2\n", "2020-04-12 09:29:45,095 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:29:45,095 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:29:45,096 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:29:45,096 Example #3\n", "2020-04-12 09:29:45,096 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:29:45,096 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:29:45,096 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:29:45,096 Validation result (greedy) at epoch 3, step 290000: bleu: 30.33, loss: 40156.4648, ppl: 4.6072, duration: 13.6853s\n", "2020-04-12 09:29:56,028 Epoch 3 Step: 290100 Batch Loss: 1.415543 Tokens per 
Sec: 19797, Lr: 0.000017\n", "2020-04-12 09:30:06,941 Epoch 3 Step: 290200 Batch Loss: 1.718781 Tokens per Sec: 19848, Lr: 0.000017\n", "2020-04-12 09:30:17,719 Epoch 3 Step: 290300 Batch Loss: 1.533010 Tokens per Sec: 19054, Lr: 0.000017\n", "2020-04-12 09:30:28,586 Epoch 3 Step: 290400 Batch Loss: 1.634016 Tokens per Sec: 19604, Lr: 0.000017\n", "2020-04-12 09:30:39,618 Epoch 3 Step: 290500 Batch Loss: 1.790158 Tokens per Sec: 20068, Lr: 0.000017\n", "2020-04-12 09:30:50,469 Epoch 3 Step: 290600 Batch Loss: 1.602235 Tokens per Sec: 19471, Lr: 0.000017\n", "2020-04-12 09:31:01,220 Epoch 3 Step: 290700 Batch Loss: 1.686220 Tokens per Sec: 19215, Lr: 0.000017\n", "2020-04-12 09:31:12,111 Epoch 3 Step: 290800 Batch Loss: 1.766415 Tokens per Sec: 19829, Lr: 0.000017\n", "2020-04-12 09:31:23,048 Epoch 3 Step: 290900 Batch Loss: 1.562175 Tokens per Sec: 20141, Lr: 0.000017\n", "2020-04-12 09:31:33,974 Epoch 3 Step: 291000 Batch Loss: 1.788393 Tokens per Sec: 19885, Lr: 0.000017\n", "2020-04-12 09:31:47,586 Example #0\n", "2020-04-12 09:31:47,587 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:31:47,588 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:31:47,588 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:31:47,588 Example #1\n", "2020-04-12 09:31:47,588 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:31:47,589 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:31:47,589 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:31:47,589 Example #2\n", "2020-04-12 09:31:47,589 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:31:47,589 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:31:47,590 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:31:47,590 Example #3\n", "2020-04-12 09:31:47,590 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:31:47,590 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:31:47,591 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:31:47,591 Validation result (greedy) at epoch 3, step 291000: bleu: 30.31, loss: 40228.3828, ppl: 4.6198, duration: 13.6163s\n", "2020-04-12 09:31:58,457 Epoch 3 Step: 291100 Batch Loss: 1.845632 Tokens per Sec: 19679, Lr: 0.000017\n", "2020-04-12 09:32:09,328 Epoch 3 Step: 291200 Batch Loss: 1.406895 Tokens per Sec: 19895, Lr: 0.000017\n", "2020-04-12 09:32:20,246 Epoch 3 Step: 291300 Batch Loss: 1.526848 Tokens per Sec: 19798, Lr: 0.000017\n", "2020-04-12 09:32:31,147 Epoch 3 Step: 291400 Batch Loss: 1.601239 Tokens per Sec: 19685, Lr: 0.000017\n", "2020-04-12 09:32:42,021 Epoch 3 Step: 291500 Batch Loss: 1.116311 Tokens per Sec: 19871, Lr: 0.000017\n", "2020-04-12 09:32:52,747 Epoch 3 Step: 291600 Batch Loss: 1.562449 Tokens per Sec: 19620, Lr: 0.000017\n", "2020-04-12 09:33:03,646 Epoch 3 Step: 291700 
Batch Loss: 1.849396 Tokens per Sec: 19537, Lr: 0.000017\n", "2020-04-12 09:33:14,445 Epoch 3 Step: 291800 Batch Loss: 1.544322 Tokens per Sec: 19525, Lr: 0.000017\n", "2020-04-12 09:33:25,357 Epoch 3 Step: 291900 Batch Loss: 1.650945 Tokens per Sec: 19945, Lr: 0.000017\n", "2020-04-12 09:33:36,147 Epoch 3 Step: 292000 Batch Loss: 1.775356 Tokens per Sec: 19636, Lr: 0.000017\n", "2020-04-12 09:33:49,680 Example #0\n", "2020-04-12 09:33:49,681 \tSource: Ìlú Áńtíókù , tá a ti kọ́kọ́ pe àwọn ọmọ ẹ̀yìn Jésù ní Kristẹni , ni Pọ́ọ̀lù ti bẹ̀rẹ̀ ìrìn àjò míṣọ́nnárì rẹ̀ àkọ́kọ́ .\n", "2020-04-12 09:33:49,681 \tReference: On his first missionary tour , he started from Antioch , where Jesus ’ followers were first called Christians .\n", "2020-04-12 09:33:49,681 \tHypothesis: Paul began his first missionary journey in Antioch , who was called Christians .\n", "2020-04-12 09:33:49,682 Example #1\n", "2020-04-12 09:33:49,682 \tSource: Torí pé mò ń tẹ̀ lé àwọn ìlànà Bíbélì ló jẹ́ kí n wà láàyè títí dòní\n", "2020-04-12 09:33:49,682 \tReference: I am alive today because of applying Bible principles\n", "2020-04-12 09:33:49,682 \tHypothesis: Because I live by Bible principles , I am living forever\n", "2020-04-12 09:33:49,682 Example #2\n", "2020-04-12 09:33:49,683 \tSource: Àjíǹde àkọ́kọ́ tí Bíbélì mẹ́nu kàn nìyẹn .\n", "2020-04-12 09:33:49,683 \tReference: That was the first resurrection of Bible record .\n", "2020-04-12 09:33:49,683 \tHypothesis: That is the first resurrection mentioned in the Bible .\n", "2020-04-12 09:33:49,683 Example #3\n", "2020-04-12 09:33:49,683 \tSource: Bíi ti afinimọ̀nà tá a mẹ́nu kàn nínú àpèjúwe wa yẹn , Jèhófà sọ pé òun máa ran àwọn tó bá fẹ́ bá òun rìn lọ́wọ́ , ó sì ní kí wọ́n wá bá òun dọ́rẹ̀ẹ́ .\n", "2020-04-12 09:33:49,683 \tReference: Like the guide in our illustration , Jehovah kindly extends his helping hand and his friendship to those who seek to walk with him .\n", "2020-04-12 09:33:49,684 \tHypothesis: Like the guidance mentioned in our illustration , Jehovah said that he would help those who want to walk with him and invite them to come to him .\n", "2020-04-12 09:33:49,684 Validation result (greedy) at epoch 3, step 292000: bleu: 30.36, loss: 40207.8125, ppl: 4.6162, duration: 13.5363s\n", "2020-04-12 09:34:00,651 Epoch 3 Step: 292100 Batch Loss: 1.552380 Tokens per Sec: 20035, Lr: 0.000017\n", "2020-04-12 09:34:11,455 Epoch 3 Step: 292200 Batch Loss: 1.723981 Tokens per Sec: 19631, Lr: 0.000017\n", "2020-04-12 09:34:22,352 Epoch 3 Step: 292300 Batch Loss: 1.403687 Tokens per Sec: 19537, Lr: 0.000017\n", "2020-04-12 09:34:33,199 Epoch 3 Step: 292400 Batch Loss: 1.027873 Tokens per Sec: 19462, Lr: 0.000017\n", "2020-04-12 09:34:44,130 Epoch 3 Step: 292500 Batch Loss: 1.547408 Tokens per Sec: 19672, Lr: 0.000017\n", "2020-04-12 09:34:44,888 Epoch 3: total training loss 8529.87\n", "2020-04-12 09:34:44,888 Training ended after 3 epochs.\n", "2020-04-12 09:34:44,889 Best validation result (greedy) at step 277000: 4.60 ppl.\n", "2020-04-12 09:35:19,356 dev bleu: 30.48 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2020-04-12 09:35:19,360 Translations saved to: /content/drive/My Drive/masakhane/model-temp/00277000.hyps.dev\n", "2020-04-12 09:35:51,468 test bleu: 39.44 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2020-04-12 09:35:51,473 Translations saved to: /content/drive/My Drive/masakhane/model-temp/00277000.hyps.test\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "MBoDS09JM807", 
"colab": {} }, "source": [ "# Copy the created models from the temporary storage to main storage on google drive for persistant storage \n", "!cp -r \"/content/drive/My Drive/masakhane/model-temp/\"* \"$gdrive_path/models/${src}${tgt}_transformer/\"" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "n94wlrCjVc17", "outputId": "6c571513-81dd-499d-de70-b7995b7a1250", "colab": { "base_uri": "https://localhost:8080/", "height": 287 } }, "source": [ "# Output our validation accuracy\n", "! cat \"$gdrive_path/models/${src}${tgt}_transformer/validations.txt\"\n" ], "execution_count": 11, "outputs": [ { "output_type": "stream", "text": [ "Steps: 278000\tLoss: 40294.09375\tPPL: 4.63137\tbleu: 30.09912\tLR: 0.00003529\t\n", "Steps: 279000\tLoss: 40277.57422\tPPL: 4.62846\tbleu: 30.26302\tLR: 0.00003529\t\n", "Steps: 280000\tLoss: 40215.38281\tPPL: 4.61752\tbleu: 30.27509\tLR: 0.00003529\t\n", "Steps: 281000\tLoss: 40243.10938\tPPL: 4.62240\tbleu: 30.13151\tLR: 0.00002471\t\n", "Steps: 282000\tLoss: 40242.69141\tPPL: 4.62232\tbleu: 30.14656\tLR: 0.00002471\t\n", "Steps: 283000\tLoss: 40270.61719\tPPL: 4.62723\tbleu: 30.16707\tLR: 0.00002471\t\n", "Steps: 284000\tLoss: 40262.34766\tPPL: 4.62578\tbleu: 30.14547\tLR: 0.00002471\t\n", "Steps: 285000\tLoss: 40268.48047\tPPL: 4.62686\tbleu: 30.09764\tLR: 0.00002471\t\n", "Steps: 286000\tLoss: 40247.50000\tPPL: 4.62317\tbleu: 30.22054\tLR: 0.00002471\t\n", "Steps: 287000\tLoss: 40190.99219\tPPL: 4.61324\tbleu: 30.37087\tLR: 0.00001729\t\n", "Steps: 288000\tLoss: 40156.46875\tPPL: 4.60718\tbleu: 30.30116\tLR: 0.00001729\t\n", "Steps: 289000\tLoss: 40172.62109\tPPL: 4.61002\tbleu: 30.37191\tLR: 0.00001729\t\n", "Steps: 290000\tLoss: 40156.46484\tPPL: 4.60718\tbleu: 30.32872\tLR: 0.00001729\t\n", "Steps: 291000\tLoss: 40228.38281\tPPL: 4.61981\tbleu: 30.31432\tLR: 0.00001729\t\n", "Steps: 292000\tLoss: 40207.81250\tPPL: 4.61619\tbleu: 30.36351\tLR: 0.00001729\t\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "66WhRE9lIhoD", "outputId": "31634857-943f-48d5-d4b3-266759f4eb43", "colab": { "base_uri": "https://localhost:8080/", "height": 71 } }, "source": [ "# Test our model\n", "! cd joeynmt; python3 -m joeynmt test \"$gdrive_path/models/${src}${tgt}_transformer/config.yaml\"\n" ], "execution_count": 12, "outputs": [ { "output_type": "stream", "text": [ "2020-04-12 09:36:10,951 Hello! This is Joey-NMT.\n", "2020-04-12 09:36:40,192 dev bleu: 30.48 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2020-04-12 09:37:12,208 test bleu: 39.44 [Beam search decoding with beam size = 5 and alpha = 1.0]\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "KaXDFfm-zgjK", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] } ] }