{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "view-in-github" }, "source": [ "\"Open" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Igc5itf-xMGj" }, "source": [ "# Masakhane - Machine Translation for African Languages (Using JoeyNMT)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "x4fXCKCf36IK" }, "source": [ "## Note before beginning:\n", "### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus. \n", "\n", "### - The tl;dr: Go to the **\"TODO\"** comments which will tell you what to update to get up and running\n", "\n", "### - If you actually want to have a clue what you're doing, read the text and peek at the links\n", "\n", "### - With 100 epochs, it should take around 7 hours to run in Google Colab\n", "\n", "### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com\n", "\n", "### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "l929HimrxS0a" }, "source": [ "## Retrieve your data & make a parallel corpus\n", "\n", "If you are wanting to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and to convert them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.\n", "\n", "Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe. " ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 122 }, "colab_type": "code", "id": "oGRmDELn7Az0", "outputId": "ccea5c09-bc5a-4a84-9818-b4271b72dc38" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n", "\n", "Enter your authorization code:\n", "··········\n", "Mounted at /content/drive\n" ] } ], "source": [ "from google.colab import drive\n", "drive.mount('/content/drive')" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Cn3tgQLzUxwn" }, "outputs": [], "source": [ "# TODO: Set your source and target languages. 
Keep in mind, these traditionally use language codes as found here:\n", "# These will also become the suffix's of all vocab and corpus files used throughout\n", "import os\n", "source_language = \"en\"\n", "target_language = \"efi\" \n", "lc = False # If True, lowercase the data.\n", "seed = 42 # Random seed for shuffling.\n", "tag = \"baseline\" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted\n", "\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "os.environ[\"tag\"] = tag\n", "\n", "# This will save it to a folder in our gdrive instead! \n", "!mkdir -p \"/content/drive/My Drive/masakhane/$src-$tgt-$tag\"\n", "g_drive_path = \"/content/drive/My Drive/masakhane/%s-%s-%s\" % (source_language, target_language, tag)\n", "os.environ[\"gdrive_path\"] = g_drive_path\n", "models_path = '%s/models/%s%s_transformer'% (g_drive_path, source_language, target_language)\n", "# model temporary directory for training\n", "model_temp_dir = \"/content/drive/My Drive/masakhane/model-temp\"\n", "# model permanent storage on the drive\n", "!mkdir -p \"$gdrive_path/models/${src}${tgt}_transformer/\"" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "id": "kBSgJHEw7Nvx", "outputId": "a3167fc9-7bfb-44c1-e0b2-6232350a7e20" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/content/drive/My Drive/masakhane/en-efi-baseline\n" ] } ], "source": [ "!echo $gdrive_path" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "id": "gA75Fs9ys8Y9", "outputId": "4286ba7f-2e11-4366-e034-abdb843c5593" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: opustools-pkg in /usr/local/lib/python3.6/dist-packages (0.0.52)\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# Install opus-tools\n", "! pip install opustools-pkg " ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 221 }, "colab_type": "code", "id": "xq-tDZVks7ZD", "outputId": "724f71b4-2db9-486e-93d2-56d2f3d495bc" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Alignment file /proj/nlpl/data/OPUS/JW300/latest/xml/efi-en.xml.gz not found. The following files are available for downloading:\n", "\n", " 3 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/efi-en.xml.gz\n", " 36 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/efi.zip\n", " 263 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en.zip\n", "\n", " 303 MB Total size\n", "./JW300_latest_xml_efi-en.xml.gz ... 100% of 3 MB\n", "./JW300_latest_xml_efi.zip ... 100% of 36 MB\n", "./JW300_latest_xml_en.zip ... 100% of 263 MB\n", "gzip: JW300_latest_xml_en-efi.xml.gz: No such file or directory\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# Downloading our corpus\n", "! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q\n", "\n", "# extract the corpus file\n", "! gunzip JW300_latest_xml_$src-$tgt.xml.gz" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "j2K6QK2NOaUX" }, "outputs": [], "source": [ "# extract the corpus file\n", "! 
gunzip JW300_latest_xml_$tgt-$src.xml.gz" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 578 }, "colab_type": "code", "id": "n48GDRnP8y2G", "outputId": "3c765279-6999-4977-c553-17cc87982fc0" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2020-04-07 10:41:12-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 277791 (271K) [text/plain]\n", "Saving to: ‘test.en-any.en’\n", "\n", "\r", "test.en-any.en 0%[ ] 0 --.-KB/s \r", "test.en-any.en 100%[===================>] 271.28K --.-KB/s in 0.1s \n", "\n", "2020-04-07 10:41:13 (2.35 MB/s) - ‘test.en-any.en’ saved [277791/277791]\n", "\n", "--2020-04-07 10:41:15-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-efi.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 203603 (199K) [text/plain]\n", "Saving to: ‘test.en-efi.en’\n", "\n", "test.en-efi.en 100%[===================>] 198.83K --.-KB/s in 0.09s \n", "\n", "2020-04-07 10:41:16 (2.07 MB/s) - ‘test.en-efi.en’ saved [203603/203603]\n", "\n", "--2020-04-07 10:41:20-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-efi.efi\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 229202 (224K) [text/plain]\n", "Saving to: ‘test.en-efi.efi’\n", "\n", "test.en-efi.efi 100%[===================>] 223.83K --.-KB/s in 0.1s \n", "\n", "2020-04-07 10:41:20 (2.19 MB/s) - ‘test.en-efi.efi’ saved [229202/229202]\n", "\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# Download the global test set.\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", " \n", "# And the specific test set for this language pair.\n", "os.environ[\"trg\"] = target_language \n", "os.environ[\"src\"] = source_language \n", "\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en \n", "! mv test.en-$trg.en test.en\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg \n", "! 
mv test.en-$trg.$trg test.$trg" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "id": "NqDG-CI28y2L", "outputId": "ae596401-d6d3-4bb0-84d1-b2955c623cf1" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loaded 3571 global test sentences to filter from the training/dev data.\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# Read the test data to filter from train and dev splits.\n", "# Store english portion in set for quick filtering checks.\n", "en_test_sents = set()\n", "filter_test_sents = \"test.en-any.en\"\n", "j = 0\n", "with open(filter_test_sents) as f:\n", " for line in f:\n", " en_test_sents.add(line.strip())\n", " j += 1\n", "print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 159 }, "colab_type": "code", "id": "3CNdwLBCfSIl", "outputId": "51262d44-631c-494d-e3e8-c8547e74b8d9" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loaded data and skipped 6113/377824 lines since contained in test set.\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
source_sentencetarget_sentence
0© 2013 Watch Tower Bible and Tract Society of ...© 2013 Watch Tower Bible and Tract Society of ...
1All rights reserved .All rights reserved .
23 Watching the World3 Se Itịbede ke Ererimbot
\n", "
" ], "text/plain": [ " source_sentence target_sentence\n", "0 © 2013 Watch Tower Bible and Tract Society of ... © 2013 Watch Tower Bible and Tract Society of ...\n", "1 All rights reserved . All rights reserved .\n", "2 3 Watching the World 3 Se Itịbede ke Ererimbot" ] }, "execution_count": 22, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "#TODO: Skip for retrain\n", "import pandas as pd\n", "\n", "# TMX file to dataframe\n", "source_file = 'jw300.' + source_language\n", "target_file = 'jw300.' + target_language\n", "\n", "source = []\n", "target = []\n", "skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion.\n", "with open(source_file) as f:\n", " for i, line in enumerate(f):\n", " # Skip sentences that are contained in the test set.\n", " if line.strip() not in en_test_sents:\n", " source.append(line.strip())\n", " else:\n", " skip_lines.append(i) \n", "with open(target_file) as f:\n", " for j, line in enumerate(f):\n", " # Only add to corpus if corresponding source was not skipped.\n", " if j not in skip_lines:\n", " target.append(line.strip())\n", " \n", "print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i))\n", " \n", "df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])\n", "# if you get TypeError: data argument can't be an iterator is because of your zip version run this below\n", "#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])\n", "df.head(3)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "YkuK3B4p2AkN" }, "source": [ "## Pre-processing and export\n", "\n", "It is generally a good idea to remove duplicate translations and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned.\n", "\n", "In addition we will split our data into dev/test/train and export to the filesystem." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 187 }, "colab_type": "code", "id": "M_2ouEOH1_1q", "outputId": "35d2bc54-ffce-4d04-decd-5269409e8d0d" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:6: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame\n", "\n", "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " \n", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame\n", "\n", "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " import sys\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# drop duplicate translations\n", "df_pp = df.drop_duplicates()\n", "\n", "# drop conflicting translations\n", "# (this is optional and something that you might want to comment out \n", "# depending on the size of your corpus)\n", "df_pp.drop_duplicates(subset='source_sentence', inplace=True)\n", "df_pp.drop_duplicates(subset='target_sentence', inplace=True)\n", "\n", "# Shuffle the data to remove bias in dev set selection.\n", "df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True)" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "colab_type": "code", "id": "Z_1BwAApEtMk", "outputId": "e2e52063-3afc-44d6-eb80-4593c78529d9" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Collecting fuzzywuzzy\n", " Downloading https://files.pythonhosted.org/packages/43/ff/74f23998ad2f93b945c0309f825be92e04e0348e062026998b5eefef4c33/fuzzywuzzy-0.18.0-py2.py3-none-any.whl\n", "Installing collected packages: fuzzywuzzy\n", "Successfully installed fuzzywuzzy-0.18.0\n", "Collecting python-Levenshtein\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/42/a9/d1785c85ebf9b7dfacd08938dd028209c34a0ea3b1bcdb895208bd40a67d/python-Levenshtein-0.12.0.tar.gz (48kB)\n", "\u001b[K |████████████████████████████████| 51kB 911kB/s \n", "\u001b[?25hRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from python-Levenshtein) (46.1.3)\n", "Building wheels for collected packages: python-Levenshtein\n", " Building wheel for python-Levenshtein (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for python-Levenshtein: filename=python_Levenshtein-0.12.0-cp36-cp36m-linux_x86_64.whl size=144801 sha256=9a3225ef63aa0c469b1c17d2027ee01d92c08cfa2ceb1c887a5348413e5ae974\n", " Stored in directory: /root/.cache/pip/wheels/de/c2/93/660fd5f7559049268ad2dc6d81c4e39e9e36518766eaf7e342\n", "Successfully built python-Levenshtein\n", "Installing collected packages: python-Levenshtein\n", "Successfully installed python-Levenshtein-0.12.0\n", "00:00:00.13 0.00 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '— ― ― ― ― ― ― ―']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "00:00:23.47 0.30 percent complete\n", "00:00:47.23 0.59 percent complete\n", "00:01:13.23 0.89 percent complete\n", "00:01:37.25 1.19 percent complete\n", "00:02:00.91 1.48 percent complete\n", "00:02:25.30 1.78 percent complete\n", "00:02:48.69 2.08 percent complete\n", "00:03:12.44 2.37 percent complete\n", "00:03:36.68 2.67 percent complete\n", "00:04:00.63 2.97 percent complete\n", "00:04:26.31 3.26 percent complete\n", "00:04:50.18 3.56 percent complete\n", "00:05:14.62 3.86 percent complete\n", "00:05:38.29 4.15 percent complete\n", "00:06:01.78 4.45 percent complete\n", "00:06:25.15 4.75 percent complete\n", "00:06:48.09 5.04 percent complete\n", "00:07:11.21 5.34 percent complete\n", "00:07:35.76 5.64 percent complete\n", "00:07:58.99 5.93 percent complete\n", "00:08:22.42 6.23 percent complete\n", "00:08:46.08 6.53 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "00:09:09.18 6.82 percent complete\n", "00:09:32.55 7.12 percent complete\n", "00:09:55.41 7.42 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '( — )']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "00:10:19.19 7.71 percent complete\n", "00:10:44.31 8.01 percent complete\n", "00:11:07.99 8.31 percent complete\n", "00:11:31.20 8.60 percent complete\n", "00:11:55.25 8.90 percent complete\n", "00:12:18.14 9.20 percent complete\n", "00:12:41.67 9.49 percent complete\n", "00:13:05.32 9.79 percent complete\n", "00:13:28.60 10.09 percent complete\n", "00:13:53.16 10.38 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '․ ․']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "00:14:17.50 10.68 percent complete\n", "00:14:40.37 10.98 percent complete\n", "00:15:03.19 11.27 percent complete\n", "00:15:25.76 11.57 percent complete\n", "00:15:48.21 11.87 percent complete\n", "00:16:09.87 12.16 percent complete\n", "00:16:32.29 12.46 percent complete\n", "00:16:54.84 12.76 percent complete\n", "00:17:17.29 13.05 percent complete\n", "00:17:39.13 13.35 percent complete\n", "00:18:01.50 13.65 percent complete\n", "00:18:23.71 13.94 percent complete\n", "00:18:45.92 14.24 percent complete\n", "00:19:08.43 14.54 percent complete\n", "00:19:31.39 14.83 percent complete\n", "00:19:54.26 15.13 percent complete\n", "00:20:17.94 15.43 percent complete\n", "00:20:40.82 15.72 percent complete\n", "00:21:03.51 16.02 percent complete\n", "00:21:26.55 16.32 percent complete\n", "00:21:49.64 16.61 percent complete\n", "00:22:13.80 16.91 percent complete\n", "00:22:37.37 17.21 percent complete\n", "00:23:00.26 17.50 percent complete\n", "00:23:25.15 17.80 percent complete\n", "00:23:48.26 18.10 percent complete\n", "00:24:11.21 18.39 percent complete\n", "00:24:34.04 18.69 percent complete\n", "00:24:56.74 18.99 percent complete\n", "00:25:19.01 19.28 percent complete\n", "00:25:41.69 19.58 percent complete\n", "00:26:04.84 19.88 percent complete\n", "00:26:29.16 20.17 percent complete\n", "00:26:52.43 20.47 percent complete\n", "00:27:15.01 20.77 percent complete\n", "00:27:37.87 21.06 percent complete\n", "00:28:00.65 21.36 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '” *']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "00:28:23.40 21.66 percent complete\n", "00:28:46.27 21.95 percent complete\n", "00:29:09.06 22.25 percent complete\n", "00:29:33.11 22.55 percent complete\n", "00:29:57.11 22.84 percent complete\n", "00:30:20.14 23.14 percent complete\n", "00:30:43.82 23.44 percent complete\n", "00:31:06.76 23.73 percent complete\n", "00:31:29.88 24.03 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '↓ ↓']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "00:31:52.30 24.33 percent complete\n", "00:32:15.66 24.62 percent complete\n", "00:32:40.29 24.92 percent complete\n", "00:33:02.85 25.22 percent complete\n", "00:33:25.97 25.51 percent complete\n", "00:33:48.88 25.81 percent complete\n", "00:34:11.92 26.11 percent complete\n", "00:34:35.40 26.40 percent complete\n", "00:34:58.44 26.70 percent complete\n", "00:35:20.72 27.00 percent complete\n", "00:35:45.28 27.29 percent complete\n", "00:36:08.34 27.59 percent complete\n", "00:36:30.78 27.89 percent complete\n", "00:36:53.86 28.18 percent complete\n", "00:37:16.93 28.48 percent complete\n", "00:37:39.37 28.78 percent complete\n", "00:38:01.55 29.07 percent complete\n", "00:38:24.13 29.37 percent complete\n", "00:38:46.95 29.67 percent complete\n", "00:39:09.10 29.96 percent complete\n", "00:39:31.78 30.26 percent complete\n", "00:39:54.70 30.56 percent complete\n", "00:40:17.46 30.85 percent complete\n", "00:40:39.90 31.15 percent complete\n", "00:41:02.42 31.45 percent complete\n", "00:41:25.00 31.74 percent complete\n", "00:41:50.09 32.04 percent complete\n", "00:42:13.37 32.34 percent complete\n", "00:42:36.77 32.63 percent complete\n", "00:42:59.84 32.93 percent complete\n", "00:43:22.86 33.22 percent complete\n", "00:43:45.40 33.52 percent complete\n", "00:44:07.65 33.82 percent complete\n", "00:44:30.24 34.11 percent complete\n", "00:44:55.85 34.41 percent complete\n", "00:45:19.59 34.71 percent complete\n", "00:45:41.81 35.00 percent complete\n", "00:46:04.30 35.30 percent complete\n", "00:46:26.59 35.60 percent complete\n", "00:46:49.92 35.89 percent complete\n", "00:47:12.58 36.19 percent complete\n", "00:47:34.90 36.49 percent complete\n", "00:47:58.93 36.78 percent complete\n", "00:48:22.48 37.08 percent complete\n", "00:48:44.88 37.38 percent complete\n", "00:49:07.35 37.67 percent complete\n", "00:49:30.35 37.97 percent complete\n", "00:49:52.94 38.27 percent complete\n", "00:50:14.24 38.56 percent complete\n", "00:50:36.92 38.86 percent complete\n", "00:50:59.67 39.16 percent complete\n", "00:51:24.09 39.45 percent complete\n", "00:51:46.38 39.75 percent complete\n", "00:52:09.23 40.05 percent complete\n", "00:52:32.24 40.34 percent complete\n", "00:52:54.81 40.64 percent complete\n", "00:53:17.69 40.94 percent complete\n", "00:53:39.72 41.23 percent complete\n", "00:54:01.82 41.53 percent complete\n", "00:54:26.20 41.83 percent complete\n", "00:54:48.55 42.12 percent complete\n", "00:55:10.52 42.42 percent complete\n", "00:55:32.68 42.72 percent complete\n", "00:55:55.67 43.01 percent complete\n", "00:56:18.44 43.31 percent complete\n", "00:56:40.94 43.61 percent complete\n", "00:57:03.85 43.90 percent complete\n", "00:57:27.84 44.20 percent complete\n", "00:57:49.66 44.50 percent complete\n", "00:58:11.60 44.79 percent complete\n", "00:58:33.86 45.09 percent complete\n", "00:58:55.68 45.39 percent complete\n", "00:59:18.18 45.68 percent complete\n", "00:59:40.52 45.98 percent complete\n", "01:00:03.42 46.28 percent complete\n", "01:00:27.65 46.57 percent complete\n", "01:00:49.76 46.87 percent complete\n", "01:01:12.39 47.17 percent complete\n", "01:01:35.22 47.46 percent complete\n", "01:01:58.23 47.76 percent complete\n", "01:02:20.14 48.06 percent complete\n", "01:02:42.49 48.35 percent complete\n", "01:03:04.59 48.65 percent complete\n", "01:03:28.70 48.95 percent complete\n", "01:03:51.20 49.24 percent complete\n", "01:04:13.59 49.54 percent complete\n", "01:04:35.44 49.84 percent 
complete\n", "01:04:58.54 50.13 percent complete\n", "01:05:21.00 50.43 percent complete\n", "01:05:43.43 50.73 percent complete\n", "01:06:05.97 51.02 percent complete\n", "01:06:30.57 51.32 percent complete\n", "01:06:52.87 51.62 percent complete\n", "01:07:15.94 51.91 percent complete\n", "01:07:38.22 52.21 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "01:08:00.90 52.51 percent complete\n", "01:08:22.99 52.80 percent complete\n", "01:08:45.79 53.10 percent complete\n", "01:09:08.16 53.40 percent complete\n", "01:09:32.58 53.69 percent complete\n", "01:09:55.15 53.99 percent complete\n", "01:10:18.09 54.29 percent complete\n", "01:10:40.50 54.58 percent complete\n", "01:11:02.40 54.88 percent complete\n", "01:11:25.19 55.18 percent complete\n", "01:11:47.70 55.47 percent complete\n", "01:12:10.46 55.77 percent complete\n", "01:12:34.70 56.07 percent complete\n", "01:12:57.22 56.36 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '․ ․ ․ ․ ․']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "01:13:19.96 56.66 percent complete\n", "01:13:42.75 56.96 percent complete\n", "01:14:05.51 57.25 percent complete\n", "01:14:27.86 57.55 percent complete\n", "01:14:50.01 57.85 percent complete\n", "01:15:12.64 58.14 percent complete\n", "01:15:39.06 58.44 percent complete\n", "01:16:01.64 58.74 percent complete\n", "01:16:24.32 59.03 percent complete\n", "01:16:47.36 59.33 percent complete\n", "01:17:09.02 59.63 percent complete\n", "01:17:32.17 59.92 percent complete\n", "01:17:54.54 60.22 percent complete\n", "01:18:17.28 60.52 percent complete\n", "01:18:42.43 60.81 percent complete\n", "01:19:05.78 61.11 percent complete\n", "01:19:27.88 61.41 percent complete\n", "01:19:49.97 61.70 percent complete\n", "01:20:12.37 62.00 percent complete\n", "01:20:34.77 62.30 percent complete\n", "01:20:57.64 62.59 percent complete\n", "01:21:20.01 62.89 percent complete\n", "01:21:43.90 63.19 percent complete\n", "01:22:06.63 63.48 percent complete\n", "01:22:29.08 63.78 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '*']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "01:22:51.12 64.08 percent complete\n", "01:23:14.11 64.37 percent complete\n", "01:23:37.09 64.67 percent complete\n", "01:24:00.07 64.97 percent complete\n", "01:24:23.24 65.26 percent complete\n", "01:24:46.84 65.56 percent complete\n", "01:25:11.42 65.86 percent complete\n", "01:25:34.14 66.15 percent complete\n", "01:25:57.85 66.45 percent complete\n", "01:26:21.15 66.75 percent complete\n", "01:26:44.18 67.04 percent complete\n", "01:27:06.11 67.34 percent complete\n", "01:27:28.17 67.64 percent complete\n", "01:27:50.68 67.93 percent complete\n", "01:28:16.04 68.23 percent complete\n", "01:28:38.68 68.53 percent complete\n", "01:29:01.66 68.82 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. 
[Query: '”']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "01:29:23.92 69.12 percent complete\n", "01:29:47.07 69.42 percent complete\n", "01:30:10.07 69.71 percent complete\n", "01:30:34.07 70.01 percent complete\n", "01:30:57.53 70.31 percent complete\n", "01:31:24.58 70.60 percent complete\n", "01:31:47.52 70.90 percent complete\n", "01:32:10.81 71.20 percent complete\n", "01:32:33.92 71.49 percent complete\n", "01:32:57.01 71.79 percent complete\n", "01:33:18.97 72.09 percent complete\n", "01:33:41.68 72.38 percent complete\n", "01:34:04.01 72.68 percent complete\n", "01:34:29.55 72.98 percent complete\n", "01:34:53.10 73.27 percent complete\n", "01:35:16.17 73.57 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '⇩']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "01:35:39.54 73.87 percent complete\n", "01:36:02.24 74.16 percent complete\n", "01:36:25.48 74.46 percent complete\n", "01:36:48.49 74.76 percent complete\n", "01:37:10.46 75.05 percent complete\n", "01:37:36.30 75.35 percent complete\n", "01:37:59.14 75.65 percent complete\n", "01:38:22.44 75.94 percent complete\n", "01:38:44.61 76.24 percent complete\n", "01:39:06.57 76.54 percent complete\n", "01:39:29.21 76.83 percent complete\n", "01:39:52.37 77.13 percent complete\n", "01:40:15.33 77.43 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '↓ ↓ ↓ ↓']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "01:40:41.48 77.72 percent complete\n", "01:41:04.55 78.02 percent complete\n", "01:41:27.18 78.32 percent complete\n", "01:41:51.03 78.61 percent complete\n", "01:42:15.09 78.91 percent complete\n", "01:42:39.00 79.21 percent complete\n", "01:43:02.63 79.50 percent complete\n", "01:43:25.90 79.80 percent complete\n", "01:43:51.44 80.10 percent complete\n", "01:44:14.54 80.39 percent complete\n", "01:44:37.70 80.69 percent complete\n", "01:45:01.10 80.99 percent complete\n", "01:45:24.57 81.28 percent complete\n", "01:45:48.03 81.58 percent complete\n", "01:46:11.43 81.88 percent complete\n", "01:46:34.64 82.17 percent complete\n", "01:47:00.82 82.47 percent complete\n", "01:47:23.07 82.77 percent complete\n", "01:47:46.29 83.06 percent complete\n", "01:48:08.38 83.36 percent complete\n", "01:48:31.21 83.66 percent complete\n", "01:48:53.93 83.95 percent complete\n", "01:49:17.20 84.25 percent complete\n", "01:49:39.98 84.55 percent complete\n", "01:50:05.94 84.84 percent complete\n", "01:50:28.82 85.14 percent complete\n", "01:50:51.72 85.44 percent complete\n", "01:51:15.22 85.73 percent complete\n", "01:51:37.48 86.03 percent complete\n", "01:52:00.25 86.33 percent complete\n", "01:52:22.75 86.62 percent complete\n", "01:52:46.56 86.92 percent complete\n", "01:53:11.50 87.22 percent complete\n", "01:53:34.44 87.51 percent complete\n", "01:53:57.30 87.81 percent complete\n", "01:54:20.34 88.11 percent complete\n", "01:54:42.83 88.40 percent complete\n", "01:55:05.72 88.70 percent complete\n", "01:55:28.96 89.00 percent complete\n", "01:55:51.42 89.29 percent complete\n", "01:56:17.70 89.59 percent complete\n", "01:56:42.54 89.89 percent complete\n", "01:57:06.63 90.18 percent complete\n", "01:57:30.01 90.48 percent complete\n", "01:57:53.28 90.78 percent complete\n", "01:58:16.94 91.07 percent 
complete\n", "01:58:39.94 91.37 percent complete\n", "01:59:03.14 91.67 percent complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Applied processor reduces input query to empty string, all comparisons will have score 0. [Query: '\\']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "01:59:27.71 91.96 percent complete\n", "01:59:53.34 92.26 percent complete\n", "02:00:15.96 92.56 percent complete\n", "02:00:38.31 92.85 percent complete\n", "02:01:01.63 93.15 percent complete\n", "02:01:25.17 93.45 percent complete\n", "02:01:47.62 93.74 percent complete\n", "02:02:10.33 94.04 percent complete\n", "02:02:33.31 94.34 percent complete\n", "02:02:57.88 94.63 percent complete\n", "02:03:19.49 94.93 percent complete\n", "02:03:42.46 95.23 percent complete\n", "02:04:04.82 95.52 percent complete\n", "02:04:27.72 95.82 percent complete\n", "02:04:50.36 96.12 percent complete\n", "02:05:13.34 96.41 percent complete\n", "02:05:35.17 96.71 percent complete\n", "02:06:00.93 97.01 percent complete\n", "02:06:23.25 97.30 percent complete\n", "02:06:45.72 97.60 percent complete\n", "02:07:07.80 97.89 percent complete\n", "02:07:30.31 98.19 percent complete\n", "02:07:52.66 98.49 percent complete\n", "02:08:14.26 98.78 percent complete\n", "02:08:36.20 99.08 percent complete\n", "02:09:01.21 99.38 percent complete\n", "02:09:23.58 99.67 percent complete\n", "02:09:46.32 99.97 percent complete\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# Install fuzzy wuzzy to remove \"almost duplicate\" sentences in the\n", "# test and training sets.\n", "! pip install fuzzywuzzy\n", "! pip install python-Levenshtein\n", "import time\n", "from fuzzywuzzy import process\n", "import numpy as np\n", "\n", "# reset the index of the training set after previous filtering\n", "df_pp.reset_index(drop=False, inplace=True)\n", "\n", "# Remove samples from the training data set if they \"almost overlap\" with the\n", "# samples in the test set.\n", "\n", "# Filtering function. Adjust pad to narrow down the candidate matches to\n", "# within a certain length of characters of the given sample.\n", "def fuzzfilter(sample, candidates, pad):\n", " candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad] \n", " if len(candidates) > 0:\n", " return process.extractOne(sample, candidates)[1]\n", " else:\n", " return np.nan\n", "\n", "# NOTE - This might run slow depending on the size of your training set. We are\n", "# printing some information to help you track how long it would take. 
\n", "scores = []\n", "start_time = time.time()\n", "for idx, row in df_pp.iterrows():\n", " scores.append(fuzzfilter(row['source_sentence'], list(en_test_sents), 5))\n", " if idx % 1000 == 0:\n", " hours, rem = divmod(time.time() - start_time, 3600)\n", " minutes, seconds = divmod(rem, 60)\n", " print(\"{:0>2}:{:0>2}:{:05.2f}\".format(int(hours),int(minutes),seconds), \"%0.2f percent complete\" % (100.0*float(idx)/float(len(df_pp))))\n", "\n", "# Filter out \"almost overlapping samples\"\n", "df_pp['scores'] = scores\n", "df_pp = df_pp[df_pp['scores'] < 95]" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 819 }, "colab_type": "code", "id": "hxxBOCA-xXhy", "outputId": "2280d2fc-21de-4059-f546-53412f256823" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "==> train.efi <==\n", "Isaiah 9 : 7 ọdọho ke Eyen Abasi edidi Edidem ye nte ke enye ayanam ediwak nti n̄kpọ ọnọ ubonowo . “ Ifịk Jehovah mme udịm edinam emi . ”\n", "The New Encyclopædia Britannica ọdọhọ ke Mme Ntiense Jehovah “ ẹdu uwem nte Bible etemede . ”\n", "Mmọ ẹkesụk ẹdu ke ini emi wheat ye mbiet ẹkọride ọtọkiet , ndien owo ikokụreke kan̄a ndutịm oro ẹkenamde man ẹnyene mbon emi ẹdisinọde mme owo udia eke spirit .\n", "SIO INI NỊM NDINAM ITIE UFAN ỌKỌRI .\n", "Ndien ami nyeben̄e ekụri nsiak ifia nnịm nnọ enye edida etem udia .\n", "Ini kiet , mma ntọhọ nnyụn̄ n̄n̄wana ye owo unek emi eketiede ubi ubi , nnyụn̄ mmia unamikọt nsio ntop nduọk ko !\n", "Edi Andibot ọmọn̄wọn̄ọ ete ke imọ iyọsọp ida utịt isọk ererimbot n̄kaowo oro odude ke emi ke idak ukara Satan kpa Devil .\n", "Ami ye Roy ima idomo ndidu uwem ekekem ye enyịn̄ oro ebe ke ndibuana ke kpukpru usụn̄ ukwọrọikọ ye ubịnikọt oro esop ekesịnde udọn̄ ọnọ .\n", "T .\n", "Sylvia emi edide nurse ọdọhọ ete : “ Ediwak mbon oro ikakade n̄wed ntre ẹma ẹsika ufọkabasi .\n", "\n", "==> train.en <==\n", "Referring to what the rulership of God’s Son will accomplish , Isaiah 9 : 7 says : “ The very zeal of Jehovah of armies will do this . ”\n", "The New Encyclopædia Britannica observes that Jehovah’s Witnesses “ insist upon a high moral code in personal conduct . ”\n", "They were still in the growing season , and the arrangement for a channel to provide spiritual food was still taking shape .\n", "MAKE TIME TO CULTIVATE A FRIENDSHIP .\n", "In the meantime , I would borrow an ax to chop firewood for cooking .\n", "On one occasion , I got into a fight with a sinister - looking customer but handled him easily .\n", "But the Creator has promised that he will soon bring an end to the present world society that is under the control of Satan the Devil .\n", "Roy and I endeavored to live up to that name by sharing in all the preaching methods and campaigns that the organization encouraged .\n", "T .\n", "“ I went to college with many who claimed to be religious , ” says Sylvia , who works in the health - care business .\n", "==> dev.efi <==\n", "Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "Jehovah ama odu ye enye . 
”\n", "Ikebịghike - bịghi , ye edisio A Facsimile Edition of the Dead Sea Scrolls ( Nsiondi Mme Ata Ata Ikpan̄wed Inyan̄ Inụn̄ ) , ẹma ẹkeme ndikụt mme ndise ikpan̄wed oro owo mîkosioho ke mbemiso mmemmem mmemmem .\n", "Esịt ama enem enye etieti .\n", "Ke 2014 , obufa ọfiọn̄ emi ekperede usen emi uwemeyo ye okoneyo ẹsidide ukem ukem ediduọ ke March 30 , ke ayakde minit 15 ndimia n̄kanika usụkkiet okoneyo ke Jerusalem .\n", "Oro akanam iyom nditiene n̄kwọrọ etop emi .\n", "Mbon oro ẹmade eti n̄kpọ kpọt ẹdinyịme .\n", "\n", "==> dev.en <==\n", "If you do , you will be choosing the best possible way of life .\n", "They may even have been told as much by a clergyman .\n", "The same point is made at 2 Chronicles 5 : 9 .\n", "59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "Jehovah was with him . ”\n", "Before long , with the publication of A Facsimile Edition of the Dead Sea Scrolls , photographs of the previously unpublished scrolls became easily accessible .\n", "What joy that brought her !\n", "( 20 : 45 ) , Jerusalem time . The following sunset in Jerusalem ( March 31 ) will come about 21 hours later .\n", "All the more reason for us to join in the proclamation .\n", "Only people who love what is good will accept him .\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# This section does the split between train/dev for the parallel corpora then saves them as separate files\n", "# We use 1000 dev test and the given test set.\n", "import csv\n", "\n", "# Do the split between dev/train and create parallel corpora\n", "num_dev_patterns = 1000\n", "\n", "# Optional: lower case the corpora - this will make it easier to generalize, but without proper casing.\n", "if lc: # Julia: making lowercasing optional\n", " df_pp[\"source_sentence\"] = df_pp[\"source_sentence\"].str.lower()\n", " df_pp[\"target_sentence\"] = df_pp[\"target_sentence\"].str.lower()\n", "\n", "# Julia: test sets are already generated\n", "dev = df_pp.tail(num_dev_patterns) # Herman: Error in original\n", "stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index)\n", "\n", "with open(\"train.\"+source_language, \"w\") as src_file, open(\"train.\"+target_language, \"w\") as trg_file:\n", " for index, row in stripped.iterrows():\n", " src_file.write(row[\"source_sentence\"]+\"\\n\")\n", " trg_file.write(row[\"target_sentence\"]+\"\\n\")\n", " \n", "with open(\"dev.\"+source_language, \"w\") as src_file, open(\"dev.\"+target_language, \"w\") as trg_file:\n", " for index, row in dev.iterrows():\n", " src_file.write(row[\"source_sentence\"]+\"\\n\")\n", " trg_file.write(row[\"target_sentence\"]+\"\\n\")\n", "\n", "#stripped[[\"source_sentence\"]].to_csv(\"train.\"+source_language, header=False, index=False) # Herman: Added `header=False` everywhere\n", "#stripped[[\"target_sentence\"]].to_csv(\"train.\"+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks.\n", "\n", "#dev[[\"source_sentence\"]].to_csv(\"dev.\"+source_language, header=False, index=False)\n", "#dev[[\"target_sentence\"]].to_csv(\"dev.\"+target_language, header=False, index=False)\n", "\n", "# Doublecheck the format below. There should be no extra quotation marks or weird characters.\n", "! head train.*\n", "! 
head dev.*" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "epeCydmCyS8X" }, "source": [ "\n", "\n", "---\n", "\n", "\n", "## Installation of JoeyNMT\n", "\n", "JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io) " ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "colab_type": "code", "id": "iBRMm4kMxZ8L", "outputId": "4cd872fa-ba2f-4764-b007-467bcd456fa5" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cloning into 'joeynmt'...\n", "remote: Enumerating objects: 3, done.\u001b[K\n", "remote: Counting objects: 100% (3/3), done.\u001b[K\n", "remote: Compressing objects: 100% (3/3), done.\u001b[K\n", "remote: Total 2380 (delta 0), reused 0 (delta 0), pack-reused 2377\u001b[K\n", "Receiving objects: 100% (2380/2380), 2.60 MiB | 2.31 MiB/s, done.\n", "Resolving deltas: 100% (1670/1670), done.\n", "Processing /content/joeynmt\n", "Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.16.0)\n", "Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (7.0.0)\n", "Requirement already satisfied: numpy<2.0,>=1.14.5 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.18.2)\n", "Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (46.1.3)\n", "Requirement already satisfied: torch>=1.1 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.4.0)\n", "Requirement already satisfied: tensorflow>=1.14 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (2.2.0rc2)\n", "Requirement already satisfied: torchtext in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.3.1)\n", "Collecting sacrebleu>=1.3.6\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/f5/58/5c6cc352ea6271125325950715cf8b59b77abe5e93cf29f6e60b491a31d9/sacrebleu-1.4.6-py3-none-any.whl (59kB)\n", "\u001b[K |████████████████████████████████| 61kB 1.1MB/s \n", "\u001b[?25hCollecting subword-nmt\n", " Downloading https://files.pythonhosted.org/packages/74/60/6600a7bc09e7ab38bc53a48a20d8cae49b837f93f5842a41fe513a694912/subword_nmt-0.3.7-py2.py3-none-any.whl\n", "Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (3.2.1)\n", "Requirement already satisfied: seaborn in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (0.10.0)\n", "Collecting pyyaml>=5.1\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz (269kB)\n", "\u001b[K |████████████████████████████████| 276kB 4.0MB/s \n", "\u001b[?25hCollecting pylint\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/e9/59/43fc36c5ee316bb9aeb7cf5329cdbdca89e5749c34d5602753827c0aa2dc/pylint-2.4.4-py3-none-any.whl (302kB)\n", "\u001b[K |████████████████████████████████| 307kB 57.3MB/s \n", "\u001b[?25hRequirement already satisfied: six==1.12 in /usr/local/lib/python3.6/dist-packages (from joeynmt==0.0.1) (1.12.0)\n", "Collecting wrapt==1.11.1\n", " Downloading https://files.pythonhosted.org/packages/67/b2/0f71ca90b0ade7fad27e3d20327c996c6252a2ffe88f50a95bba7434eda9/wrapt-1.11.1.tar.gz\n", "Requirement already satisfied: tensorflow-estimator<2.3.0,>=2.2.0rc0 in 
/usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (2.2.0rc0)\n", "Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.1.0)\n", "Requirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.1.0)\n", "Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.6.3)\n", "Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.27.2)\n", "Requirement already satisfied: tensorboard<2.3.0,>=2.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (2.2.0)\n", "Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (3.2.0)\n", "Requirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.2.0)\n", "Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.9.0)\n", "Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (2.10.0)\n", "Requirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (3.10.0)\n", "Requirement already satisfied: wheel>=0.26; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.34.2)\n", "Requirement already satisfied: scipy==1.4.1; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (1.4.1)\n", "Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14->joeynmt==0.0.1) (0.3.3)\n", "Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from torchtext->joeynmt==0.0.1) (2.21.0)\n", "Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from torchtext->joeynmt==0.0.1) (4.38.0)\n", "Collecting mecab-python3\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/18/49/b55a839a77189042960bf96490640c44816073f917d489acbc5d79fa5cc3/mecab_python3-0.996.5-cp36-cp36m-manylinux2010_x86_64.whl (17.1MB)\n", "\u001b[K |████████████████████████████████| 17.1MB 200kB/s \n", "\u001b[?25hCollecting portalocker\n", " Downloading https://files.pythonhosted.org/packages/64/03/9abfb3374d67838daf24f1a388528714bec1debb1d13749f0abd7fb07cfb/portalocker-1.6.0-py2.py3-none-any.whl\n", "Requirement already satisfied: typing in /usr/local/lib/python3.6/dist-packages (from sacrebleu>=1.3.6->joeynmt==0.0.1) (3.6.6)\n", "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (2.4.6)\n", "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (2.8.1)\n", "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (1.2.0)\n", "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->joeynmt==0.0.1) (0.10.0)\n", "Requirement already satisfied: pandas>=0.22.0 in 
/usr/local/lib/python3.6/dist-packages (from seaborn->joeynmt==0.0.1) (1.0.3)\n", "Collecting astroid<2.4,>=2.3.0\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/ad/ae/86734823047962e7b8c8529186a1ac4a7ca19aaf1aa0c7713c022ef593fd/astroid-2.3.3-py3-none-any.whl (205kB)\n", "\u001b[K |████████████████████████████████| 215kB 61.3MB/s \n", "\u001b[?25hCollecting isort<5,>=4.2.5\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/e5/b0/c121fd1fa3419ea9bfd55c7f9c4fedfec5143208d8c7ad3ce3db6c623c21/isort-4.3.21-py2.py3-none-any.whl (42kB)\n", "\u001b[K |████████████████████████████████| 51kB 7.8MB/s \n", "\u001b[?25hCollecting mccabe<0.7,>=0.6\n", " Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl\n", "Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (3.2.1)\n", "Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.7.2)\n", "Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.0.1)\n", "Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.6.0.post2)\n", "Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (0.4.1)\n", "Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (1.24.3)\n", "Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (3.0.4)\n", "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (2019.11.28)\n", "Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext->joeynmt==0.0.1) (2.8)\n", "Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->seaborn->joeynmt==0.0.1) (2018.9)\n", "Collecting lazy-object-proxy==1.4.*\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/0b/dd/b1e3407e9e6913cf178e506cd0dee818e58694d9a5cd1984e3f6a8b9a10f/lazy_object_proxy-1.4.3-cp36-cp36m-manylinux1_x86_64.whl (55kB)\n", "\u001b[K |████████████████████████████████| 61kB 8.6MB/s \n", "\u001b[?25hCollecting typed-ast<1.5,>=1.4.0; implementation_name == \"cpython\" and python_version < \"3.8\"\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/90/ed/5459080d95eb87a02fe860d447197be63b6e2b5e9ff73c2b0a85622994f4/typed_ast-1.4.1-cp36-cp36m-manylinux1_x86_64.whl (737kB)\n", "\u001b[K |████████████████████████████████| 747kB 64.4MB/s \n", "\u001b[?25hRequirement already satisfied: cachetools<3.2,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (3.1.1)\n", "Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (4.0)\n", 
"Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (0.2.8)\n", "Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (1.3.0)\n", "Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<4.1,>=3.1.4->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (0.4.8)\n", "Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->joeynmt==0.0.1) (3.1.0)\n", "Building wheels for collected packages: joeynmt, pyyaml, wrapt\n", " Building wheel for joeynmt (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for joeynmt: filename=joeynmt-0.0.1-cp36-none-any.whl size=73768 sha256=89928a71dba6299fa590b2e3aa35c718986d92c848776139e99d4db0c8e19bf3\n", " Stored in directory: /tmp/pip-ephem-wheel-cache-clor59d_/wheels/db/01/db/751cc9f3e7f6faec127c43644ba250a3ea7ad200594aeda70a\n", " Building wheel for pyyaml (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for pyyaml: filename=PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl size=44621 sha256=2429b3effea1bb425377daef070f44a92967e98a656cc62766a78bb0b4b2b497\n", " Stored in directory: /root/.cache/pip/wheels/a7/c1/ea/cf5bd31012e735dc1dfea3131a2d5eae7978b251083d6247bd\n", " Building wheel for wrapt (setup.py) ... \u001b[?25l\u001b[?25hdone\n", " Created wheel for wrapt: filename=wrapt-1.11.1-cp36-cp36m-linux_x86_64.whl size=67430 sha256=61f829831a03970770d2c7b2bec42178fd22cc683c18885c204fa19b3a0cf6b1\n", " Stored in directory: /root/.cache/pip/wheels/89/67/41/63cbf0f6ac0a6156588b9587be4db5565f8c6d8ccef98202fc\n", "Successfully built joeynmt pyyaml wrapt\n", "Installing collected packages: mecab-python3, portalocker, sacrebleu, subword-nmt, pyyaml, wrapt, lazy-object-proxy, typed-ast, astroid, isort, mccabe, pylint, joeynmt\n", " Found existing installation: PyYAML 3.13\n", " Uninstalling PyYAML-3.13:\n", " Successfully uninstalled PyYAML-3.13\n", " Found existing installation: wrapt 1.12.1\n", " Uninstalling wrapt-1.12.1:\n", " Successfully uninstalled wrapt-1.12.1\n", "Successfully installed astroid-2.3.3 isort-4.3.21 joeynmt-0.0.1 lazy-object-proxy-1.4.3 mccabe-0.6.1 mecab-python3-0.996.5 portalocker-1.6.0 pylint-2.4.4 pyyaml-5.3.1 sacrebleu-1.4.6 subword-nmt-0.3.7 typed-ast-1.4.1 wrapt-1.11.1\n" ] } ], "source": [ "# Install JoeyNMT\n", "! git clone https://github.com/joeynmt/joeynmt.git\n", "! cd joeynmt; pip3 install ." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "AaE77Tcppex9" }, "source": [ "# Preprocessing the Data into Subword BPE Tokens\n", "\n", "- One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [ (Sennrich, 2015) ](https://arxiv.org/abs/1508.07909).\n", "\n", "- It was also shown that by optimizing the umber of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)\n", "\n", "- Below we have the scripts for doing BPE tokenization of our data. 
We use 4000 tokens as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything. Simply running the below will be suitable. " ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 459 }, "colab_type": "code", "id": "H-TyjtmXB1mL", "outputId": "30ee4eff-3e72-4f7f-c0ac-f76f0dcf75b9" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "bpe.codes.4000\tdev.efi test.bpe.en test.en-any.en train.efi\n", "dev.bpe.efi\tdev.en\t test.efi\t train.bpe.efi train.en\n", "dev.bpe.en\ttest.bpe.efi test.en\t train.bpe.en\n", "1000.hyps 4000.hyps\t dev.efi\t test.bpe.en\t train.bpe.en\n", "2000.ckpt best.ckpt\t dev.en\t test.efi\t train.efi\n", "2000.hyps bpe.codes.4000 models\t test.en\t train.en\n", "3000.ckpt config.yaml\t src_vocab.txt test.en-any.en train.log\n", "3000.hyps dev.bpe.efi\t tensorboard\t test.en-any.en.1 trg_vocab.txt\n", "4000.ckpt dev.bpe.en\t test.bpe.efi train.bpe.efi validations.txt\n", "BPE Xhosa Sentences\n", "18 , 19 . ( a ) Didie ke nditọete ke esop mbufo ẹkeme ndin̄wam fi ada san̄asan̄a ?\n", "“ Ndi@@ tie n̄kere se Mme N̄ke 27 : 11 , Matthew 26 : 5@@ 2 , ye John 13 : 35 ẹdọhọde ama an̄wam mi nt@@ etịm mb@@ iere ke ndid@@ ụk@@ ke ekọn̄ .\n", "Mme itie N̄wed Abasi emi ama anam esịt ana mi sụn̄ ke ini afanikọn̄ emi . ” — A@@ nd@@ ri@@ y emi otode Uk@@ ra@@ ine .\n", "“ Isaiah 2 : 4 ama an̄wam mi n̄ka iso nda san̄asan̄a ke ini idomo .\n", "Mma n@@ tie n̄kere nte uwem ed@@ inem@@ de ke obufa ererimbot , ke ini mme owo mîdi@@ d@@ aha n̄kpọ@@ ekọn̄ iw@@ ot owo . ” — W@@ il@@ m@@ er emi otode C@@ olo@@ mb@@ ia .\n", "Combined BPE Vocab\n", "ō\n", "ι\n", "⁄\n", "◀\n", "ˋ@@\n", "/@@\n", "ā\n", "Α@@\n", "bless@@\n", ";@@\n" ] } ], "source": [ "#TODO: Skip for retrain\n", "# One of the huge boosts in NMT performance was to use a different method of tokenizing. \n", "# Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance\n", "\n", "# Do subword NMT\n", "from os import path\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "\n", "# Learn BPEs on the training data.\n", "os.environ[\"data_path\"] = path.join(\"joeynmt\", \"data\", source_language + target_language) # Herman! \n", "! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt\n", "\n", "# Apply BPE splits to the development and test data.\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt\n", "\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src\n", "! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt\n", "\n", "# Create directory, move everyone we care about to the correct location\n", "! mkdir -p $data_path\n", "! cp train.* $data_path\n", "! cp test.* $data_path\n", "! cp dev.* $data_path\n", "! cp bpe.codes.4000 $data_path\n", "! 
ls $data_path\n", "\n", "# Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path\n", "! cp train.* \"$gdrive_path\"\n", "! cp test.* \"$gdrive_path\"\n", "! cp dev.* \"$gdrive_path\"\n", "! cp bpe.codes.4000 \"$gdrive_path\"\n", "! ls \"$gdrive_path\"\n", "\n", "# Create that vocab using build_vocab\n", "! sudo chmod 777 joeynmt/scripts/build_vocab.py\n", "! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path \"$gdrive_path/vocab.txt\"\n", "\n", "# Some output\n", "! echo \"BPE Efik Sentences\"\n", "! tail -n 5 test.bpe.$tgt\n", "! echo \"Combined BPE Vocab\"\n", "! tail -n 10 \"$gdrive_path/vocab.txt\" # Herman" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ixmzi60WsUZ8" }, "source": [ "# Creating the JoeyNMT Config\n", "\n", "JoeyNMT requires a YAML config. We provide a template below, with a number of sensible defaults already set that you may play with!\n", "\n", "- We use the Transformer architecture\n", "- We set our dropout reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))\n", "\n", "Things worth playing with:\n", "- The batch size (also recommended to change for low-resourced languages)\n", "- The number of epochs (set to 3 here since we are resuming from an existing checkpoint; around 30 epochs takes roughly an hour and is enough for testing purposes)\n", "- The decoder options (beam_size, alpha)\n", "- Evaluation metrics (BLEU versus ChrF)" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Wc47fvWqyxbd" }, "outputs": [], "source": [ "# Return the newest step-numbered checkpoint (e.g. '75000.ckpt') in a model directory, ignoring best.ckpt.\n", "def get_last_checkpoint(directory):\n", "    last_checkpoint = ''\n", "    try:\n", "        for filename in os.listdir(directory):\n", "            if 'best' not in filename and filename.endswith(\".ckpt\"):\n", "                if not last_checkpoint or int(filename.split('.')[0]) > int(last_checkpoint.split('.')[0]):\n", "                    last_checkpoint = filename\n", "    except FileNotFoundError as e:\n", "        print('Error occurred:', e)\n", "    return last_checkpoint" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "id": "x_ffEoFdy1Qo", "outputId": "b0bd7cd6-f1a5-4451-8ec1-bea975dfd14a" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Last checkpoint : 75000.ckpt\n" ] } ], "source": [ "# Copy any previously created models from the temporary storage to permanent storage on Google Drive\n", "# (the contents of the temporary folder will be overwritten when training starts)\n", "!cp -r \"/content/drive/My Drive/masakhane/model-temp/\"* \"$gdrive_path/models/${src}${tgt}_transformer/\"\n", "last_checkpoint = get_last_checkpoint(models_path)\n", "print('Last checkpoint :', last_checkpoint)" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "PIs1lY2hxMsl" }, "outputs": [], "source": [ "# This creates the config file for our JoeyNMT system. It might seem overwhelming, so we've marked the parameters you are most likely to need to update.\n", "# (You can of course play with all the parameters if you'd like!)\n", "\n",
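 "# Optional sanity check (a sketch, not part of the original notebook): confirm that the checkpoint we are\n", "# about to resume from really exists, since load_model in the config below will point at it.\n", "ckpt_to_load = os.path.join(models_path, last_checkpoint)\n", "print('Will resume from:', ckpt_to_load, '| exists:', os.path.exists(ckpt_to_load))\n", "\n",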
"name = '%s%s' % (source_language, target_language)\n", "gdrive_path = os.environ[\"gdrive_path\"]\n", "\n", "# Create the config\n", "config = \"\"\"\n", "name: \"{name}_transformer\"\n", "\n", "data:\n", "    src: \"{source_language}\"\n", "    trg: \"{target_language}\"\n", "    train: \"{gdrive_path}/train.bpe\"\n", "    dev: \"{gdrive_path}/dev.bpe\"\n", "    test: \"{gdrive_path}/test.bpe\"\n", "    level: \"bpe\"\n", "    lowercase: False\n", "    max_sent_length: 100\n", "    src_vocab: \"{gdrive_path}/vocab.txt\"\n", "    trg_vocab: \"{gdrive_path}/vocab.txt\"\n", "\n", "testing:\n", "    beam_size: 5\n", "    alpha: 1.0\n", "\n", "training:\n", "    load_model: \"{gdrive_path}/models/{name}_transformer/{last_checkpoint}\" # NB: this resumes training from the last saved checkpoint; remove this line to train from scratch\n", "    random_seed: 42\n", "    optimizer: \"adam\"\n", "    normalization: \"tokens\"\n", "    adam_betas: [0.9, 0.999]\n", "    scheduling: \"plateau\" # TODO: try switching from plateau to Noam scheduling\n", "    patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.\n", "    learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)\n", "    learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)\n", "    decrease_factor: 0.7\n", "    loss: \"crossentropy\"\n", "    learning_rate: 0.0003\n", "    learning_rate_min: 0.00000001\n", "    weight_decay: 0.0\n", "    label_smoothing: 0.1\n", "    batch_size: 4096\n", "    batch_type: \"token\"\n", "    eval_batch_size: 3600\n", "    eval_batch_type: \"token\"\n", "    batch_multiplier: 1\n", "    early_stopping_metric: \"ppl\"\n", "    epochs: 3 # TODO: keep this small while checking that everything works; around 30 epochs is enough to see whether training works at all\n", "    validation_freq: 1000 # TODO: set this so that validation runs at least once per epoch.\n", "    logging_freq: 100\n", "    eval_metric: \"bleu\"\n", "    model_dir: \"{model_temp_dir}\"\n", "    overwrite: True # NB: when True, an existing model in model_dir will be overwritten.
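\n", "    # (Sketch, not from the original template) To try the Noam schedule mentioned in the TODO next to\n", "    # 'scheduling' above, you could replace the plateau settings with, for example:\n", "    #     scheduling: \"noam\"   # learning_rate_factor and learning_rate_warmup above then control the rate\n", "    # leaving the rest of this training section unchanged (check the JoeyNMT docs before relying on this).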
\n", " shuffle: True\n", " use_cuda: True\n", " max_output_length: 100\n", " print_valid_sents: [0, 1, 2, 3]\n", " keep_last_ckpts: 3\n", "\n", "model:\n", " initializer: \"xavier\"\n", " bias_initializer: \"zeros\"\n", " init_gain: 1.0\n", " embed_initializer: \"xavier\"\n", " embed_init_gain: 1.0\n", " tied_embeddings: True\n", " tied_softmax: True\n", " encoder:\n", " type: \"transformer\"\n", " num_layers: 6\n", " num_heads: 4 # TODO: Increase to 8 for larger data.\n", " embeddings:\n", " embedding_dim: 256 # TODO: Increase to 512 for larger data.\n", " scale: True\n", " dropout: 0.2\n", " # typically ff_size = 4 x hidden_size\n", " hidden_size: 256 # TODO: Increase to 512 for larger data.\n", " ff_size: 1024 # TODO: Increase to 2048 for larger data.\n", " dropout: 0.3\n", " decoder:\n", " type: \"transformer\"\n", " num_layers: 6\n", " num_heads: 4 # TODO: Increase to 8 for larger data.\n", " embeddings:\n", " embedding_dim: 256 # TODO: Increase to 512 for larger data.\n", " scale: True\n", " dropout: 0.2\n", " # typically ff_size = 4 x hidden_size\n", " hidden_size: 256 # TODO: Increase to 512 for larger data.\n", " ff_size: 1024 # TODO: Increase to 2048 for larger data.\n", " dropout: 0.3\n", "\"\"\".format(name=name, gdrive_path=os.environ[\"gdrive_path\"], source_language=source_language, target_language=target_language, model_temp_dir=model_temp_dir, last_checkpoint=last_checkpoint)\n", "with open(\"joeynmt/configs/transformer_{name}.yaml\".format(name=name),'w') as f:\n", " f.write(config)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "pIifxE3Qzuvs" }, "source": [ "# Train the Model\n", "\n", "This single line of joeynmt runs the training using the config we made above" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "colab_type": "code", "id": "6ZBPFwT94WpI", "outputId": "ccce8245-45ef-4bd4-a81a-85fd57336ab4" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2020-04-07 20:54:03,168 Hello! 
This is Joey-NMT.\n", "2020-04-07 20:54:03.318950: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n", "2020-04-07 20:54:05,394 Total params: 12173824\n", "2020-04-07 20:54:05,395 Trainable parameters: ['decoder.layer_norm.bias', 'decoder.layer_norm.weight', 'decoder.layers.0.dec_layer_norm.bias', 'decoder.layers.0.dec_layer_norm.weight', 'decoder.layers.0.feed_forward.layer_norm.bias', 'decoder.layers.0.feed_forward.layer_norm.weight', 'decoder.layers.0.feed_forward.pwff_layer.0.bias', 'decoder.layers.0.feed_forward.pwff_layer.0.weight', 'decoder.layers.0.feed_forward.pwff_layer.3.bias', 'decoder.layers.0.feed_forward.pwff_layer.3.weight', 'decoder.layers.0.src_trg_att.k_layer.bias', 'decoder.layers.0.src_trg_att.k_layer.weight', 'decoder.layers.0.src_trg_att.output_layer.bias', 'decoder.layers.0.src_trg_att.output_layer.weight', 'decoder.layers.0.src_trg_att.q_layer.bias', 'decoder.layers.0.src_trg_att.q_layer.weight', 'decoder.layers.0.src_trg_att.v_layer.bias', 'decoder.layers.0.src_trg_att.v_layer.weight', 'decoder.layers.0.trg_trg_att.k_layer.bias', 'decoder.layers.0.trg_trg_att.k_layer.weight', 'decoder.layers.0.trg_trg_att.output_layer.bias', 'decoder.layers.0.trg_trg_att.output_layer.weight', 'decoder.layers.0.trg_trg_att.q_layer.bias', 'decoder.layers.0.trg_trg_att.q_layer.weight', 'decoder.layers.0.trg_trg_att.v_layer.bias', 'decoder.layers.0.trg_trg_att.v_layer.weight', 'decoder.layers.0.x_layer_norm.bias', 'decoder.layers.0.x_layer_norm.weight', 'decoder.layers.1.dec_layer_norm.bias', 'decoder.layers.1.dec_layer_norm.weight', 'decoder.layers.1.feed_forward.layer_norm.bias', 'decoder.layers.1.feed_forward.layer_norm.weight', 'decoder.layers.1.feed_forward.pwff_layer.0.bias', 'decoder.layers.1.feed_forward.pwff_layer.0.weight', 'decoder.layers.1.feed_forward.pwff_layer.3.bias', 'decoder.layers.1.feed_forward.pwff_layer.3.weight', 'decoder.layers.1.src_trg_att.k_layer.bias', 'decoder.layers.1.src_trg_att.k_layer.weight', 'decoder.layers.1.src_trg_att.output_layer.bias', 'decoder.layers.1.src_trg_att.output_layer.weight', 'decoder.layers.1.src_trg_att.q_layer.bias', 'decoder.layers.1.src_trg_att.q_layer.weight', 'decoder.layers.1.src_trg_att.v_layer.bias', 'decoder.layers.1.src_trg_att.v_layer.weight', 'decoder.layers.1.trg_trg_att.k_layer.bias', 'decoder.layers.1.trg_trg_att.k_layer.weight', 'decoder.layers.1.trg_trg_att.output_layer.bias', 'decoder.layers.1.trg_trg_att.output_layer.weight', 'decoder.layers.1.trg_trg_att.q_layer.bias', 'decoder.layers.1.trg_trg_att.q_layer.weight', 'decoder.layers.1.trg_trg_att.v_layer.bias', 'decoder.layers.1.trg_trg_att.v_layer.weight', 'decoder.layers.1.x_layer_norm.bias', 'decoder.layers.1.x_layer_norm.weight', 'decoder.layers.2.dec_layer_norm.bias', 'decoder.layers.2.dec_layer_norm.weight', 'decoder.layers.2.feed_forward.layer_norm.bias', 'decoder.layers.2.feed_forward.layer_norm.weight', 'decoder.layers.2.feed_forward.pwff_layer.0.bias', 'decoder.layers.2.feed_forward.pwff_layer.0.weight', 'decoder.layers.2.feed_forward.pwff_layer.3.bias', 'decoder.layers.2.feed_forward.pwff_layer.3.weight', 'decoder.layers.2.src_trg_att.k_layer.bias', 'decoder.layers.2.src_trg_att.k_layer.weight', 'decoder.layers.2.src_trg_att.output_layer.bias', 'decoder.layers.2.src_trg_att.output_layer.weight', 'decoder.layers.2.src_trg_att.q_layer.bias', 'decoder.layers.2.src_trg_att.q_layer.weight', 'decoder.layers.2.src_trg_att.v_layer.bias', 'decoder.layers.2.src_trg_att.v_layer.weight', 
'decoder.layers.2.trg_trg_att.k_layer.bias', 'decoder.layers.2.trg_trg_att.k_layer.weight', 'decoder.layers.2.trg_trg_att.output_layer.bias', 'decoder.layers.2.trg_trg_att.output_layer.weight', 'decoder.layers.2.trg_trg_att.q_layer.bias', 'decoder.layers.2.trg_trg_att.q_layer.weight', 'decoder.layers.2.trg_trg_att.v_layer.bias', 'decoder.layers.2.trg_trg_att.v_layer.weight', 'decoder.layers.2.x_layer_norm.bias', 'decoder.layers.2.x_layer_norm.weight', 'decoder.layers.3.dec_layer_norm.bias', 'decoder.layers.3.dec_layer_norm.weight', 'decoder.layers.3.feed_forward.layer_norm.bias', 'decoder.layers.3.feed_forward.layer_norm.weight', 'decoder.layers.3.feed_forward.pwff_layer.0.bias', 'decoder.layers.3.feed_forward.pwff_layer.0.weight', 'decoder.layers.3.feed_forward.pwff_layer.3.bias', 'decoder.layers.3.feed_forward.pwff_layer.3.weight', 'decoder.layers.3.src_trg_att.k_layer.bias', 'decoder.layers.3.src_trg_att.k_layer.weight', 'decoder.layers.3.src_trg_att.output_layer.bias', 'decoder.layers.3.src_trg_att.output_layer.weight', 'decoder.layers.3.src_trg_att.q_layer.bias', 'decoder.layers.3.src_trg_att.q_layer.weight', 'decoder.layers.3.src_trg_att.v_layer.bias', 'decoder.layers.3.src_trg_att.v_layer.weight', 'decoder.layers.3.trg_trg_att.k_layer.bias', 'decoder.layers.3.trg_trg_att.k_layer.weight', 'decoder.layers.3.trg_trg_att.output_layer.bias', 'decoder.layers.3.trg_trg_att.output_layer.weight', 'decoder.layers.3.trg_trg_att.q_layer.bias', 'decoder.layers.3.trg_trg_att.q_layer.weight', 'decoder.layers.3.trg_trg_att.v_layer.bias', 'decoder.layers.3.trg_trg_att.v_layer.weight', 'decoder.layers.3.x_layer_norm.bias', 'decoder.layers.3.x_layer_norm.weight', 'decoder.layers.4.dec_layer_norm.bias', 'decoder.layers.4.dec_layer_norm.weight', 'decoder.layers.4.feed_forward.layer_norm.bias', 'decoder.layers.4.feed_forward.layer_norm.weight', 'decoder.layers.4.feed_forward.pwff_layer.0.bias', 'decoder.layers.4.feed_forward.pwff_layer.0.weight', 'decoder.layers.4.feed_forward.pwff_layer.3.bias', 'decoder.layers.4.feed_forward.pwff_layer.3.weight', 'decoder.layers.4.src_trg_att.k_layer.bias', 'decoder.layers.4.src_trg_att.k_layer.weight', 'decoder.layers.4.src_trg_att.output_layer.bias', 'decoder.layers.4.src_trg_att.output_layer.weight', 'decoder.layers.4.src_trg_att.q_layer.bias', 'decoder.layers.4.src_trg_att.q_layer.weight', 'decoder.layers.4.src_trg_att.v_layer.bias', 'decoder.layers.4.src_trg_att.v_layer.weight', 'decoder.layers.4.trg_trg_att.k_layer.bias', 'decoder.layers.4.trg_trg_att.k_layer.weight', 'decoder.layers.4.trg_trg_att.output_layer.bias', 'decoder.layers.4.trg_trg_att.output_layer.weight', 'decoder.layers.4.trg_trg_att.q_layer.bias', 'decoder.layers.4.trg_trg_att.q_layer.weight', 'decoder.layers.4.trg_trg_att.v_layer.bias', 'decoder.layers.4.trg_trg_att.v_layer.weight', 'decoder.layers.4.x_layer_norm.bias', 'decoder.layers.4.x_layer_norm.weight', 'decoder.layers.5.dec_layer_norm.bias', 'decoder.layers.5.dec_layer_norm.weight', 'decoder.layers.5.feed_forward.layer_norm.bias', 'decoder.layers.5.feed_forward.layer_norm.weight', 'decoder.layers.5.feed_forward.pwff_layer.0.bias', 'decoder.layers.5.feed_forward.pwff_layer.0.weight', 'decoder.layers.5.feed_forward.pwff_layer.3.bias', 'decoder.layers.5.feed_forward.pwff_layer.3.weight', 'decoder.layers.5.src_trg_att.k_layer.bias', 'decoder.layers.5.src_trg_att.k_layer.weight', 'decoder.layers.5.src_trg_att.output_layer.bias', 'decoder.layers.5.src_trg_att.output_layer.weight', 'decoder.layers.5.src_trg_att.q_layer.bias', 
'decoder.layers.5.src_trg_att.q_layer.weight', 'decoder.layers.5.src_trg_att.v_layer.bias', 'decoder.layers.5.src_trg_att.v_layer.weight', 'decoder.layers.5.trg_trg_att.k_layer.bias', 'decoder.layers.5.trg_trg_att.k_layer.weight', 'decoder.layers.5.trg_trg_att.output_layer.bias', 'decoder.layers.5.trg_trg_att.output_layer.weight', 'decoder.layers.5.trg_trg_att.q_layer.bias', 'decoder.layers.5.trg_trg_att.q_layer.weight', 'decoder.layers.5.trg_trg_att.v_layer.bias', 'decoder.layers.5.trg_trg_att.v_layer.weight', 'decoder.layers.5.x_layer_norm.bias', 'decoder.layers.5.x_layer_norm.weight', 'encoder.layer_norm.bias', 'encoder.layer_norm.weight', 'encoder.layers.0.feed_forward.layer_norm.bias', 'encoder.layers.0.feed_forward.layer_norm.weight', 'encoder.layers.0.feed_forward.pwff_layer.0.bias', 'encoder.layers.0.feed_forward.pwff_layer.0.weight', 'encoder.layers.0.feed_forward.pwff_layer.3.bias', 'encoder.layers.0.feed_forward.pwff_layer.3.weight', 'encoder.layers.0.layer_norm.bias', 'encoder.layers.0.layer_norm.weight', 'encoder.layers.0.src_src_att.k_layer.bias', 'encoder.layers.0.src_src_att.k_layer.weight', 'encoder.layers.0.src_src_att.output_layer.bias', 'encoder.layers.0.src_src_att.output_layer.weight', 'encoder.layers.0.src_src_att.q_layer.bias', 'encoder.layers.0.src_src_att.q_layer.weight', 'encoder.layers.0.src_src_att.v_layer.bias', 'encoder.layers.0.src_src_att.v_layer.weight', 'encoder.layers.1.feed_forward.layer_norm.bias', 'encoder.layers.1.feed_forward.layer_norm.weight', 'encoder.layers.1.feed_forward.pwff_layer.0.bias', 'encoder.layers.1.feed_forward.pwff_layer.0.weight', 'encoder.layers.1.feed_forward.pwff_layer.3.bias', 'encoder.layers.1.feed_forward.pwff_layer.3.weight', 'encoder.layers.1.layer_norm.bias', 'encoder.layers.1.layer_norm.weight', 'encoder.layers.1.src_src_att.k_layer.bias', 'encoder.layers.1.src_src_att.k_layer.weight', 'encoder.layers.1.src_src_att.output_layer.bias', 'encoder.layers.1.src_src_att.output_layer.weight', 'encoder.layers.1.src_src_att.q_layer.bias', 'encoder.layers.1.src_src_att.q_layer.weight', 'encoder.layers.1.src_src_att.v_layer.bias', 'encoder.layers.1.src_src_att.v_layer.weight', 'encoder.layers.2.feed_forward.layer_norm.bias', 'encoder.layers.2.feed_forward.layer_norm.weight', 'encoder.layers.2.feed_forward.pwff_layer.0.bias', 'encoder.layers.2.feed_forward.pwff_layer.0.weight', 'encoder.layers.2.feed_forward.pwff_layer.3.bias', 'encoder.layers.2.feed_forward.pwff_layer.3.weight', 'encoder.layers.2.layer_norm.bias', 'encoder.layers.2.layer_norm.weight', 'encoder.layers.2.src_src_att.k_layer.bias', 'encoder.layers.2.src_src_att.k_layer.weight', 'encoder.layers.2.src_src_att.output_layer.bias', 'encoder.layers.2.src_src_att.output_layer.weight', 'encoder.layers.2.src_src_att.q_layer.bias', 'encoder.layers.2.src_src_att.q_layer.weight', 'encoder.layers.2.src_src_att.v_layer.bias', 'encoder.layers.2.src_src_att.v_layer.weight', 'encoder.layers.3.feed_forward.layer_norm.bias', 'encoder.layers.3.feed_forward.layer_norm.weight', 'encoder.layers.3.feed_forward.pwff_layer.0.bias', 'encoder.layers.3.feed_forward.pwff_layer.0.weight', 'encoder.layers.3.feed_forward.pwff_layer.3.bias', 'encoder.layers.3.feed_forward.pwff_layer.3.weight', 'encoder.layers.3.layer_norm.bias', 'encoder.layers.3.layer_norm.weight', 'encoder.layers.3.src_src_att.k_layer.bias', 'encoder.layers.3.src_src_att.k_layer.weight', 'encoder.layers.3.src_src_att.output_layer.bias', 'encoder.layers.3.src_src_att.output_layer.weight', 'encoder.layers.3.src_src_att.q_layer.bias', 
'encoder.layers.3.src_src_att.q_layer.weight', 'encoder.layers.3.src_src_att.v_layer.bias', 'encoder.layers.3.src_src_att.v_layer.weight', 'encoder.layers.4.feed_forward.layer_norm.bias', 'encoder.layers.4.feed_forward.layer_norm.weight', 'encoder.layers.4.feed_forward.pwff_layer.0.bias', 'encoder.layers.4.feed_forward.pwff_layer.0.weight', 'encoder.layers.4.feed_forward.pwff_layer.3.bias', 'encoder.layers.4.feed_forward.pwff_layer.3.weight', 'encoder.layers.4.layer_norm.bias', 'encoder.layers.4.layer_norm.weight', 'encoder.layers.4.src_src_att.k_layer.bias', 'encoder.layers.4.src_src_att.k_layer.weight', 'encoder.layers.4.src_src_att.output_layer.bias', 'encoder.layers.4.src_src_att.output_layer.weight', 'encoder.layers.4.src_src_att.q_layer.bias', 'encoder.layers.4.src_src_att.q_layer.weight', 'encoder.layers.4.src_src_att.v_layer.bias', 'encoder.layers.4.src_src_att.v_layer.weight', 'encoder.layers.5.feed_forward.layer_norm.bias', 'encoder.layers.5.feed_forward.layer_norm.weight', 'encoder.layers.5.feed_forward.pwff_layer.0.bias', 'encoder.layers.5.feed_forward.pwff_layer.0.weight', 'encoder.layers.5.feed_forward.pwff_layer.3.bias', 'encoder.layers.5.feed_forward.pwff_layer.3.weight', 'encoder.layers.5.layer_norm.bias', 'encoder.layers.5.layer_norm.weight', 'encoder.layers.5.src_src_att.k_layer.bias', 'encoder.layers.5.src_src_att.k_layer.weight', 'encoder.layers.5.src_src_att.output_layer.bias', 'encoder.layers.5.src_src_att.output_layer.weight', 'encoder.layers.5.src_src_att.q_layer.bias', 'encoder.layers.5.src_src_att.q_layer.weight', 'encoder.layers.5.src_src_att.v_layer.bias', 'encoder.layers.5.src_src_att.v_layer.weight', 'src_embed.lut.weight']\n", "2020-04-07 20:54:20,764 Loading model from /content/drive/My Drive/masakhane/en-efi-baseline/models/enefi_transformer/75000.ckpt\n", "2020-04-07 20:54:21,100 cfg.name : enefi_transformer\n", "2020-04-07 20:54:21,100 cfg.data.src : en\n", "2020-04-07 20:54:21,101 cfg.data.trg : efi\n", "2020-04-07 20:54:21,101 cfg.data.train : /content/drive/My Drive/masakhane/en-efi-baseline/train.bpe\n", "2020-04-07 20:54:21,101 cfg.data.dev : /content/drive/My Drive/masakhane/en-efi-baseline/dev.bpe\n", "2020-04-07 20:54:21,101 cfg.data.test : /content/drive/My Drive/masakhane/en-efi-baseline/test.bpe\n", "2020-04-07 20:54:21,101 cfg.data.level : bpe\n", "2020-04-07 20:54:21,101 cfg.data.lowercase : False\n", "2020-04-07 20:54:21,101 cfg.data.max_sent_length : 100\n", "2020-04-07 20:54:21,102 cfg.data.src_vocab : /content/drive/My Drive/masakhane/en-efi-baseline/vocab.txt\n", "2020-04-07 20:54:21,102 cfg.data.trg_vocab : /content/drive/My Drive/masakhane/en-efi-baseline/vocab.txt\n", "2020-04-07 20:54:21,102 cfg.testing.beam_size : 5\n", "2020-04-07 20:54:21,102 cfg.testing.alpha : 1.0\n", "2020-04-07 20:54:21,102 cfg.training.load_model : /content/drive/My Drive/masakhane/en-efi-baseline/models/enefi_transformer/75000.ckpt\n", "2020-04-07 20:54:21,102 cfg.training.random_seed : 42\n", "2020-04-07 20:54:21,102 cfg.training.optimizer : adam\n", "2020-04-07 20:54:21,102 cfg.training.normalization : tokens\n", "2020-04-07 20:54:21,103 cfg.training.adam_betas : [0.9, 0.999]\n", "2020-04-07 20:54:21,103 cfg.training.scheduling : plateau\n", "2020-04-07 20:54:21,103 cfg.training.patience : 5\n", "2020-04-07 20:54:21,103 cfg.training.learning_rate_factor : 0.5\n", "2020-04-07 20:54:21,103 cfg.training.learning_rate_warmup : 1000\n", "2020-04-07 20:54:21,103 cfg.training.decrease_factor : 0.7\n", "2020-04-07 20:54:21,103 cfg.training.loss : crossentropy\n", 
"2020-04-07 20:54:21,103 cfg.training.learning_rate : 0.0003\n", "2020-04-07 20:54:21,103 cfg.training.learning_rate_min : 1e-08\n", "2020-04-07 20:54:21,104 cfg.training.weight_decay : 0.0\n", "2020-04-07 20:54:21,104 cfg.training.label_smoothing : 0.1\n", "2020-04-07 20:54:21,104 cfg.training.batch_size : 4096\n", "2020-04-07 20:54:21,104 cfg.training.batch_type : token\n", "2020-04-07 20:54:21,104 cfg.training.eval_batch_size : 3600\n", "2020-04-07 20:54:21,104 cfg.training.eval_batch_type : token\n", "2020-04-07 20:54:21,104 cfg.training.batch_multiplier : 1\n", "2020-04-07 20:54:21,105 cfg.training.early_stopping_metric : ppl\n", "2020-04-07 20:54:21,105 cfg.training.epochs : 3\n", "2020-04-07 20:54:21,105 cfg.training.validation_freq : 1000\n", "2020-04-07 20:54:21,105 cfg.training.logging_freq : 100\n", "2020-04-07 20:54:21,105 cfg.training.eval_metric : bleu\n", "2020-04-07 20:54:21,105 cfg.training.model_dir : /content/drive/My Drive/masakhane/model-temp\n", "2020-04-07 20:54:21,105 cfg.training.overwrite : True\n", "2020-04-07 20:54:21,105 cfg.training.shuffle : True\n", "2020-04-07 20:54:21,106 cfg.training.use_cuda : True\n", "2020-04-07 20:54:21,106 cfg.training.max_output_length : 100\n", "2020-04-07 20:54:21,106 cfg.training.print_valid_sents : [0, 1, 2, 3]\n", "2020-04-07 20:54:21,106 cfg.training.keep_last_ckpts : 3\n", "2020-04-07 20:54:21,106 cfg.model.initializer : xavier\n", "2020-04-07 20:54:21,106 cfg.model.bias_initializer : zeros\n", "2020-04-07 20:54:21,106 cfg.model.init_gain : 1.0\n", "2020-04-07 20:54:21,106 cfg.model.embed_initializer : xavier\n", "2020-04-07 20:54:21,106 cfg.model.embed_init_gain : 1.0\n", "2020-04-07 20:54:21,107 cfg.model.tied_embeddings : True\n", "2020-04-07 20:54:21,107 cfg.model.tied_softmax : True\n", "2020-04-07 20:54:21,107 cfg.model.encoder.type : transformer\n", "2020-04-07 20:54:21,107 cfg.model.encoder.num_layers : 6\n", "2020-04-07 20:54:21,107 cfg.model.encoder.num_heads : 4\n", "2020-04-07 20:54:21,107 cfg.model.encoder.embeddings.embedding_dim : 256\n", "2020-04-07 20:54:21,107 cfg.model.encoder.embeddings.scale : True\n", "2020-04-07 20:54:21,107 cfg.model.encoder.embeddings.dropout : 0.2\n", "2020-04-07 20:54:21,108 cfg.model.encoder.hidden_size : 256\n", "2020-04-07 20:54:21,108 cfg.model.encoder.ff_size : 1024\n", "2020-04-07 20:54:21,108 cfg.model.encoder.dropout : 0.3\n", "2020-04-07 20:54:21,108 cfg.model.decoder.type : transformer\n", "2020-04-07 20:54:21,108 cfg.model.decoder.num_layers : 6\n", "2020-04-07 20:54:21,108 cfg.model.decoder.num_heads : 4\n", "2020-04-07 20:54:21,108 cfg.model.decoder.embeddings.embedding_dim : 256\n", "2020-04-07 20:54:21,108 cfg.model.decoder.embeddings.scale : True\n", "2020-04-07 20:54:21,108 cfg.model.decoder.embeddings.dropout : 0.2\n", "2020-04-07 20:54:21,109 cfg.model.decoder.hidden_size : 256\n", "2020-04-07 20:54:21,109 cfg.model.decoder.ff_size : 1024\n", "2020-04-07 20:54:21,109 cfg.model.decoder.dropout : 0.3\n", "2020-04-07 20:54:21,109 Data set sizes: \n", "\ttrain 334651,\n", "\tvalid 1000,\n", "\ttest 2675\n", "2020-04-07 20:54:21,109 First training example:\n", "\t[SRC] R@@ ef@@ er@@ ring to what the rul@@ er@@ ship of God’s Son will accompl@@ ish , Isaiah 9 : 7 says : “ The very z@@ eal of Jehovah of ar@@ mi@@ es will do this . ”\n", "\t[TRG] Isaiah 9 : 7 ọd@@ ọh@@ o ke Eyen Abasi edidi Edidem ye nte ke enye ayanam ediwak nti n̄kpọ ọnọ ubonowo . “ I@@ f@@ ịk Jehovah mme udịm edinam emi . ”\n", "2020-04-07 20:54:21,109 First 10 words (src): (0) (1) (2) (3) (4) . 
(5) , (6) ke (7) the (8) to (9) of\n", "2020-04-07 20:54:21,109 First 10 words (trg): (0) (1) (2) (3) (4) . (5) , (6) ke (7) the (8) to (9) of\n", "2020-04-07 20:54:21,110 Number of Src words (types): 4350\n", "2020-04-07 20:54:21,110 Number of Trg words (types): 4350\n", "2020-04-07 20:54:21,110 Model(\n", "\tencoder=TransformerEncoder(num_layers=6, num_heads=4),\n", "\tdecoder=TransformerDecoder(num_layers=6, num_heads=4),\n", "\tsrc_embed=Embeddings(embedding_dim=256, vocab_size=4350),\n", "\ttrg_embed=Embeddings(embedding_dim=256, vocab_size=4350))\n", "2020-04-07 20:54:21,253 EPOCH 1\n", "2020-04-07 20:54:33,171 Epoch 1 Step: 75100 Batch Loss: 1.573272 Tokens per Sec: 19671, Lr: 0.000300\n", "2020-04-07 20:54:44,301 Epoch 1 Step: 75200 Batch Loss: 1.599319 Tokens per Sec: 20553, Lr: 0.000300\n", "2020-04-07 20:54:55,456 Epoch 1 Step: 75300 Batch Loss: 1.966765 Tokens per Sec: 20017, Lr: 0.000300\n", "2020-04-07 20:55:06,573 Epoch 1 Step: 75400 Batch Loss: 1.750993 Tokens per Sec: 20776, Lr: 0.000300\n", "2020-04-07 20:55:17,659 Epoch 1 Step: 75500 Batch Loss: 1.297595 Tokens per Sec: 20773, Lr: 0.000300\n", "2020-04-07 20:55:28,903 Epoch 1 Step: 75600 Batch Loss: 1.379848 Tokens per Sec: 20993, Lr: 0.000300\n", "2020-04-07 20:55:40,231 Epoch 1 Step: 75700 Batch Loss: 1.868639 Tokens per Sec: 20789, Lr: 0.000300\n", "2020-04-07 20:55:51,380 Epoch 1 Step: 75800 Batch Loss: 1.783921 Tokens per Sec: 20867, Lr: 0.000300\n", "2020-04-07 20:56:02,480 Epoch 1 Step: 75900 Batch Loss: 1.731708 Tokens per Sec: 20425, Lr: 0.000300\n", "2020-04-07 20:56:13,727 Epoch 1 Step: 76000 Batch Loss: 1.620422 Tokens per Sec: 20738, Lr: 0.000300\n", "2020-04-07 20:56:26,413 Example #0\n", "2020-04-07 20:56:26,414 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 20:56:26,414 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 20:56:26,414 \tHypothesis: Edieke anamde emi , afo oyoyom mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 20:56:26,414 Example #1\n", "2020-04-07 20:56:26,415 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 20:56:26,415 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 20:56:26,415 \tHypothesis: Ekeme ndidi mme ọkwọrọ ederi ẹma ẹkam ẹdọhọ mmọ ke mmọ ẹma ẹkam ẹtịn̄ ẹban̄a mmimọ .\n", "2020-04-07 20:56:26,415 Example #2\n", "2020-04-07 20:56:26,416 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 20:56:26,416 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 20:56:26,416 \tHypothesis: Ẹnam ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 20:56:26,416 Example #3\n", "2020-04-07 20:56:26,416 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 20:56:26,416 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 20:56:26,416 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ mban̄a Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 20:56:26,417 Validation result (greedy) at epoch 1, step 76000: bleu: 30.37, loss: 36773.7344, ppl: 4.7829, duration: 12.6896s\n", "2020-04-07 20:56:37,662 Epoch 1 Step: 76100 Batch Loss: 1.741713 Tokens per Sec: 20903, Lr: 0.000300\n", "2020-04-07 20:56:48,936 Epoch 1 Step: 76200 Batch Loss: 1.694483 Tokens per Sec: 20845, Lr: 0.000300\n", "2020-04-07 20:57:00,065 Epoch 1 Step: 76300 Batch Loss: 1.946359 Tokens per Sec: 20531, Lr: 0.000300\n", "2020-04-07 20:57:11,188 Epoch 1 Step: 76400 Batch Loss: 1.772593 Tokens per Sec: 20213, Lr: 0.000300\n", "2020-04-07 20:57:22,437 Epoch 1 Step: 76500 Batch Loss: 1.839959 Tokens per Sec: 20427, Lr: 0.000300\n", "2020-04-07 20:57:33,594 Epoch 1 Step: 76600 Batch Loss: 1.706491 Tokens per Sec: 20898, Lr: 0.000300\n", "2020-04-07 20:57:44,843 Epoch 1 Step: 76700 Batch Loss: 1.665444 Tokens per Sec: 20739, Lr: 0.000300\n", "2020-04-07 20:57:56,066 Epoch 1 Step: 76800 Batch Loss: 1.606557 Tokens per Sec: 20781, Lr: 0.000300\n", "2020-04-07 20:58:07,212 Epoch 1 Step: 76900 Batch Loss: 1.435567 Tokens per Sec: 20546, Lr: 0.000300\n", "2020-04-07 20:58:18,399 Epoch 1 Step: 77000 Batch Loss: 1.803759 Tokens per Sec: 20899, Lr: 0.000300\n", "2020-04-07 20:58:29,894 Example #0\n", "2020-04-07 20:58:29,895 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 20:58:29,895 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 20:58:29,895 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 20:58:29,895 Example #1\n", "2020-04-07 20:58:29,896 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 20:58:29,896 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 20:58:29,896 \tHypothesis: Ekeme ndidi mme ọkwọrọ ederi ẹma ẹkam ẹdọhọ mmọ ke mmimọ imọn̄ itịn̄ iban̄a mmọ .\n", "2020-04-07 20:58:29,896 Example #2\n", "2020-04-07 20:58:29,896 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 20:58:29,897 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 20:58:29,897 \tHypothesis: Ẹnam ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 20:58:29,897 Example #3\n", "2020-04-07 20:58:29,897 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 20:58:29,897 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 20:58:29,897 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ mban̄a Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 20:58:29,898 Validation result (greedy) at epoch 1, step 77000: bleu: 30.70, loss: 36633.9766, ppl: 4.7545, duration: 11.4985s\n", "2020-04-07 20:58:41,130 Epoch 1 Step: 77100 Batch Loss: 1.740518 Tokens per Sec: 20146, Lr: 0.000300\n", "2020-04-07 20:58:52,284 Epoch 1 Step: 77200 Batch Loss: 1.691751 Tokens per Sec: 20650, Lr: 0.000300\n", "2020-04-07 20:59:03,536 Epoch 1 Step: 77300 Batch Loss: 1.737996 Tokens per Sec: 20715, Lr: 0.000300\n", "2020-04-07 20:59:14,741 Epoch 1 Step: 77400 Batch Loss: 1.669374 Tokens per Sec: 20322, Lr: 0.000300\n", "2020-04-07 20:59:26,092 Epoch 1 Step: 77500 Batch Loss: 1.812358 Tokens per Sec: 20963, Lr: 0.000300\n", "2020-04-07 20:59:37,388 Epoch 1 Step: 77600 Batch Loss: 1.756553 Tokens per Sec: 20892, Lr: 0.000300\n", "2020-04-07 20:59:48,438 Epoch 1 Step: 77700 Batch Loss: 1.184197 Tokens per Sec: 20854, Lr: 0.000300\n", "2020-04-07 20:59:59,645 Epoch 1 Step: 77800 Batch Loss: 1.677460 Tokens per Sec: 20276, Lr: 0.000300\n", "2020-04-07 21:00:10,955 Epoch 1 Step: 77900 Batch Loss: 1.585014 Tokens per Sec: 20550, Lr: 0.000300\n", "2020-04-07 21:00:22,014 Epoch 1 Step: 78000 Batch Loss: 1.842488 Tokens per Sec: 20295, Lr: 0.000300\n", "2020-04-07 21:00:33,058 Hooray! New best validation result [ppl]!\n", "2020-04-07 21:00:33,059 Saving new checkpoint.\n", "2020-04-07 21:00:34,268 Example #0\n", "2020-04-07 21:00:34,268 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:00:34,269 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:00:34,269 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:00:34,269 Example #1\n", "2020-04-07 21:00:34,269 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:00:34,269 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:00:34,270 \tHypothesis: Ekeme ndidi mme ọkwọrọ ederi ẹma ẹkam ẹdọhọ mmọ ke mmọ ẹma ẹkam ẹtịn̄ ẹban̄a mmimọ .\n", "2020-04-07 21:00:34,270 Example #2\n", "2020-04-07 21:00:34,270 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:00:34,270 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:00:34,270 \tHypothesis: Ẹnam ukem oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:00:34,272 Example #3\n", "2020-04-07 21:00:34,274 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:00:34,274 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:00:34,275 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ mban̄a Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ eke ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:00:34,275 Validation result (greedy) at epoch 1, step 78000: bleu: 30.59, loss: 36591.0938, ppl: 4.7458, duration: 12.2599s\n", "2020-04-07 21:00:45,811 Epoch 1 Step: 78100 Batch Loss: 1.472596 Tokens per Sec: 20392, Lr: 0.000300\n", "2020-04-07 21:00:56,842 Epoch 1 Step: 78200 Batch Loss: 1.574353 Tokens per Sec: 20226, Lr: 0.000300\n", "2020-04-07 21:01:07,948 Epoch 1 Step: 78300 Batch Loss: 1.786395 Tokens per Sec: 20939, Lr: 0.000300\n", "2020-04-07 21:01:19,089 Epoch 1 Step: 78400 Batch Loss: 1.846857 Tokens per Sec: 20406, Lr: 0.000300\n", "2020-04-07 21:01:22,017 Epoch 1: total training loss 5766.25\n", "2020-04-07 21:01:22,018 EPOCH 2\n", "2020-04-07 21:01:30,622 Epoch 2 Step: 78500 Batch Loss: 1.770589 Tokens per Sec: 19627, Lr: 0.000300\n", "2020-04-07 21:01:41,762 Epoch 2 Step: 78600 Batch Loss: 1.378725 Tokens per Sec: 20754, Lr: 0.000300\n", "2020-04-07 21:01:52,959 Epoch 2 Step: 78700 Batch Loss: 1.965990 Tokens per Sec: 20977, Lr: 0.000300\n", "2020-04-07 21:02:04,061 Epoch 2 Step: 78800 Batch Loss: 1.896216 Tokens per Sec: 20856, Lr: 0.000300\n", "2020-04-07 21:02:15,106 Epoch 2 Step: 78900 Batch Loss: 1.573402 Tokens per Sec: 20523, Lr: 0.000300\n", "2020-04-07 21:02:26,252 Epoch 2 Step: 79000 Batch Loss: 1.179836 Tokens per Sec: 20752, Lr: 0.000300\n", "2020-04-07 21:02:37,172 Hooray! New best validation result [ppl]!\n", "2020-04-07 21:02:37,172 Saving new checkpoint.\n", "2020-04-07 21:02:38,362 Example #0\n", "2020-04-07 21:02:38,363 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:02:38,363 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:02:38,363 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:02:38,363 Example #1\n", "2020-04-07 21:02:38,363 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:02:38,364 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:02:38,364 \tHypothesis: Ekeme ndidi ọkwọrọ ederi ama akam etịn̄ se mmọ ẹketịn̄de .\n", "2020-04-07 21:02:38,364 Example #2\n", "2020-04-07 21:02:38,364 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:02:38,364 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:02:38,364 \tHypothesis: Ẹwet ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:02:38,365 Example #3\n", "2020-04-07 21:02:38,365 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:02:38,365 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:02:38,365 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ mban̄a Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ oro ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:02:38,365 Validation result (greedy) at epoch 2, step 79000: bleu: 30.31, loss: 36588.3047, ppl: 4.7453, duration: 12.1131s\n", "2020-04-07 21:02:49,890 Epoch 2 Step: 79100 Batch Loss: 1.628276 Tokens per Sec: 20131, Lr: 0.000300\n", "2020-04-07 21:03:01,063 Epoch 2 Step: 79200 Batch Loss: 1.773140 Tokens per Sec: 20635, Lr: 0.000300\n", "2020-04-07 21:03:12,291 Epoch 2 Step: 79300 Batch Loss: 1.876546 Tokens per Sec: 20732, Lr: 0.000300\n", "2020-04-07 21:03:23,725 Epoch 2 Step: 79400 Batch Loss: 1.926523 Tokens per Sec: 20328, Lr: 0.000300\n", "2020-04-07 21:03:35,009 Epoch 2 Step: 79500 Batch Loss: 1.648053 Tokens per Sec: 20269, Lr: 0.000300\n", "2020-04-07 21:03:46,388 Epoch 2 Step: 79600 Batch Loss: 1.593967 Tokens per Sec: 19602, Lr: 0.000300\n", "2020-04-07 21:03:57,857 Epoch 2 Step: 79700 Batch Loss: 1.696361 Tokens per Sec: 20187, Lr: 0.000300\n", "2020-04-07 21:04:09,187 Epoch 2 Step: 79800 Batch Loss: 1.760031 Tokens per Sec: 19984, Lr: 0.000300\n", "2020-04-07 21:04:20,515 Epoch 2 Step: 79900 Batch Loss: 1.584432 Tokens per Sec: 19808, Lr: 0.000300\n", "2020-04-07 21:04:31,860 Epoch 2 Step: 80000 Batch Loss: 1.820143 Tokens per Sec: 20865, Lr: 0.000300\n", "2020-04-07 21:04:43,200 Hooray! New best validation result [ppl]!\n", "2020-04-07 21:04:43,201 Saving new checkpoint.\n", "2020-04-07 21:04:44,491 Example #0\n", "2020-04-07 21:04:44,492 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:04:44,492 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:04:44,492 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:04:44,493 Example #1\n", "2020-04-07 21:04:44,493 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:04:44,493 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:04:44,493 \tHypothesis: Ekeme ndidi mme ọkwọrọ ederi ẹma ẹkam ẹtịn̄ ẹban̄a mmọ ukem nte mme ọkwọrọ ederi .\n", "2020-04-07 21:04:44,494 Example #2\n", "2020-04-07 21:04:44,494 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:04:44,494 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:04:44,494 \tHypothesis: Ẹwet ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:04:44,495 Example #3\n", "2020-04-07 21:04:44,495 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:04:44,495 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:04:44,496 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ mban̄a Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ eke ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:04:44,496 Validation result (greedy) at epoch 2, step 80000: bleu: 30.38, loss: 36509.8242, ppl: 4.7294, duration: 12.6358s\n", "2020-04-07 21:04:56,039 Epoch 2 Step: 80100 Batch Loss: 1.555527 Tokens per Sec: 19992, Lr: 0.000300\n", "2020-04-07 21:05:07,283 Epoch 2 Step: 80200 Batch Loss: 1.488823 Tokens per Sec: 20541, Lr: 0.000300\n", "2020-04-07 21:05:18,485 Epoch 2 Step: 80300 Batch Loss: 1.738822 Tokens per Sec: 20119, Lr: 0.000300\n", "2020-04-07 21:05:29,816 Epoch 2 Step: 80400 Batch Loss: 1.605760 Tokens per Sec: 20493, Lr: 0.000300\n", "2020-04-07 21:05:41,042 Epoch 2 Step: 80500 Batch Loss: 1.637010 Tokens per Sec: 20369, Lr: 0.000300\n", "2020-04-07 21:05:52,300 Epoch 2 Step: 80600 Batch Loss: 1.643769 Tokens per Sec: 20523, Lr: 0.000300\n", "2020-04-07 21:06:03,557 Epoch 2 Step: 80700 Batch Loss: 1.717212 Tokens per Sec: 20824, Lr: 0.000300\n", "2020-04-07 21:06:14,660 Epoch 2 Step: 80800 Batch Loss: 1.777844 Tokens per Sec: 20179, Lr: 0.000300\n", "2020-04-07 21:06:25,715 Epoch 2 Step: 80900 Batch Loss: 1.996720 Tokens per Sec: 20187, Lr: 0.000300\n", "2020-04-07 21:06:36,917 Epoch 2 Step: 81000 Batch Loss: 1.505029 Tokens per Sec: 20542, Lr: 0.000300\n", "2020-04-07 21:06:48,173 Hooray! New best validation result [ppl]!\n", "2020-04-07 21:06:48,173 Saving new checkpoint.\n", "2020-04-07 21:06:49,499 Example #0\n", "2020-04-07 21:06:49,500 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:06:49,500 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:06:49,500 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:06:49,501 Example #1\n", "2020-04-07 21:06:49,501 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:06:49,501 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:06:49,501 \tHypothesis: Ekeme ndidi ọkwọrọ ederi ama akam ọdọhọ mmọ ke mmọ ẹma ẹkam ẹtịn̄ ẹban̄a mmimọ .\n", "2020-04-07 21:06:49,502 Example #2\n", "2020-04-07 21:06:49,504 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:06:49,506 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:06:49,506 \tHypothesis: Ẹnam ukem n̄kpọ oro ke 2 Chronicle 5 : 9 .\n", "2020-04-07 21:06:49,507 Example #3\n", "2020-04-07 21:06:49,507 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:06:49,507 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:06:49,508 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:06:49,508 Validation result (greedy) at epoch 2, step 81000: bleu: 30.77, loss: 36469.5391, ppl: 4.7213, duration: 12.5905s\n", "2020-04-07 21:07:01,162 Epoch 2 Step: 81100 Batch Loss: 1.795741 Tokens per Sec: 20264, Lr: 0.000300\n", "2020-04-07 21:07:12,487 Epoch 2 Step: 81200 Batch Loss: 1.663168 Tokens per Sec: 20997, Lr: 0.000300\n", "2020-04-07 21:07:23,728 Epoch 2 Step: 81300 Batch Loss: 1.696081 Tokens per Sec: 20511, Lr: 0.000300\n", "2020-04-07 21:07:34,823 Epoch 2 Step: 81400 Batch Loss: 1.563580 Tokens per Sec: 20583, Lr: 0.000300\n", "2020-04-07 21:07:46,048 Epoch 2 Step: 81500 Batch Loss: 1.545787 Tokens per Sec: 20787, Lr: 0.000300\n", "2020-04-07 21:07:57,180 Epoch 2 Step: 81600 Batch Loss: 1.634702 Tokens per Sec: 20406, Lr: 0.000300\n", "2020-04-07 21:08:08,468 Epoch 2 Step: 81700 Batch Loss: 1.769388 Tokens per Sec: 20685, Lr: 0.000300\n", "2020-04-07 21:08:19,736 Epoch 2 Step: 81800 Batch Loss: 1.751492 Tokens per Sec: 20813, Lr: 0.000300\n", "2020-04-07 21:08:26,597 Epoch 2: total training loss 5752.46\n", "2020-04-07 21:08:26,597 EPOCH 3\n", "2020-04-07 21:08:31,454 Epoch 3 Step: 81900 Batch Loss: 1.622603 Tokens per Sec: 18971, Lr: 0.000300\n", "2020-04-07 21:08:42,744 Epoch 3 Step: 82000 Batch Loss: 1.742999 Tokens per Sec: 20507, Lr: 0.000300\n", "2020-04-07 21:08:54,244 Example #0\n", "2020-04-07 21:08:54,245 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:08:54,245 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:08:54,246 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:08:54,246 Example #1\n", "2020-04-07 21:08:54,246 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:08:54,246 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:08:54,247 \tHypothesis: Ekeme ndidi ọkwọrọ ederi kiet ama akam ọdọhọ mmọ ke mmọ ẹma ẹkam ẹtịn̄ se mmọ ẹketịn̄de .\n", "2020-04-07 21:08:54,247 Example #2\n", "2020-04-07 21:08:54,247 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:08:54,247 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:08:54,247 \tHypothesis: Ẹnam ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:08:54,248 Example #3\n", "2020-04-07 21:08:54,248 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:08:54,248 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:08:54,248 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ mban̄a Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:08:54,248 Validation result (greedy) at epoch 3, step 82000: bleu: 30.70, loss: 36578.5469, ppl: 4.7433, duration: 11.5042s\n", "2020-04-07 21:09:05,400 Epoch 3 Step: 82100 Batch Loss: 1.654117 Tokens per Sec: 20566, Lr: 0.000300\n", "2020-04-07 21:09:16,698 Epoch 3 Step: 82200 Batch Loss: 2.097586 Tokens per Sec: 20356, Lr: 0.000300\n", "2020-04-07 21:09:27,839 Epoch 3 Step: 82300 Batch Loss: 1.614651 Tokens per Sec: 20274, Lr: 0.000300\n", "2020-04-07 21:09:39,090 Epoch 3 Step: 82400 Batch Loss: 1.883934 Tokens per Sec: 20091, Lr: 0.000300\n", "2020-04-07 21:09:50,233 Epoch 3 Step: 82500 Batch Loss: 1.728126 Tokens per Sec: 20985, Lr: 0.000300\n", "2020-04-07 21:10:01,409 Epoch 3 Step: 82600 Batch Loss: 1.831730 Tokens per Sec: 20477, Lr: 0.000300\n", "2020-04-07 21:10:12,606 Epoch 3 Step: 82700 Batch Loss: 1.697250 Tokens per Sec: 20794, Lr: 0.000300\n", "2020-04-07 21:10:23,809 Epoch 3 Step: 82800 Batch Loss: 1.662617 Tokens per Sec: 21062, Lr: 0.000300\n", "2020-04-07 21:10:34,963 Epoch 3 Step: 82900 Batch Loss: 1.601122 Tokens per Sec: 20384, Lr: 0.000300\n", "2020-04-07 21:10:46,195 Epoch 3 Step: 83000 Batch Loss: 1.814855 Tokens per Sec: 20695, Lr: 0.000300\n", "2020-04-07 21:10:58,561 Hooray! New best validation result [ppl]!\n", "2020-04-07 21:10:58,561 Saving new checkpoint.\n", "2020-04-07 21:10:59,917 Example #0\n", "2020-04-07 21:10:59,917 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:10:59,917 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:10:59,918 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:10:59,918 Example #1\n", "2020-04-07 21:10:59,918 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:10:59,918 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:10:59,918 \tHypothesis: Ekeme ndidi ọkwọrọ ederi kiet ama akam ọdọhọ mmọ ke mmọ ẹma ẹkam ẹtịn̄ se mmọ ẹkekeme .\n", "2020-04-07 21:10:59,919 Example #2\n", "2020-04-07 21:10:59,919 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:10:59,919 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:10:59,919 \tHypothesis: Ẹnam ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:10:59,919 Example #3\n", "2020-04-07 21:10:59,920 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:10:59,920 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:10:59,920 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:10:59,920 Validation result (greedy) at epoch 3, step 83000: bleu: 30.75, loss: 36308.5273, ppl: 4.6891, duration: 13.7245s\n", "2020-04-07 21:11:11,420 Epoch 3 Step: 83100 Batch Loss: 1.809379 Tokens per Sec: 20280, Lr: 0.000300\n", "2020-04-07 21:11:22,578 Epoch 3 Step: 83200 Batch Loss: 1.724175 Tokens per Sec: 20662, Lr: 0.000300\n", "2020-04-07 21:11:33,631 Epoch 3 Step: 83300 Batch Loss: 1.757362 Tokens per Sec: 20542, Lr: 0.000300\n", "2020-04-07 21:11:44,853 Epoch 3 Step: 83400 Batch Loss: 1.978654 Tokens per Sec: 20568, Lr: 0.000300\n", "2020-04-07 21:11:56,189 Epoch 3 Step: 83500 Batch Loss: 1.734641 Tokens per Sec: 20209, Lr: 0.000300\n", "2020-04-07 21:12:07,551 Epoch 3 Step: 83600 Batch Loss: 1.351897 Tokens per Sec: 20323, Lr: 0.000300\n", "2020-04-07 21:12:18,959 Epoch 3 Step: 83700 Batch Loss: 1.648239 Tokens per Sec: 20271, Lr: 0.000300\n", "2020-04-07 21:12:30,249 Epoch 3 Step: 83800 Batch Loss: 1.649844 Tokens per Sec: 20061, Lr: 0.000300\n", "2020-04-07 21:12:41,664 Epoch 3 Step: 83900 Batch Loss: 1.705879 Tokens per Sec: 20261, Lr: 0.000300\n", "2020-04-07 21:12:52,997 Epoch 3 Step: 84000 Batch Loss: 1.523355 Tokens per Sec: 20206, Lr: 0.000300\n", "2020-04-07 21:13:04,286 Example #0\n", "2020-04-07 21:13:04,287 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:13:04,287 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:13:04,287 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:13:04,288 Example #1\n", "2020-04-07 21:13:04,288 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:13:04,288 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:13:04,288 \tHypothesis: Ekeme ndidi mme ọkwọrọ ederi ẹma ẹkam ẹdọhọ mmọ nte mme ọkwọrọ ederi .\n", "2020-04-07 21:13:04,288 Example #2\n", "2020-04-07 21:13:04,289 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:13:04,289 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:13:04,289 \tHypothesis: Ẹnam ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:13:04,289 Example #3\n", "2020-04-07 21:13:04,289 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:13:04,290 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:13:04,290 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ mban̄a Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ eke ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:13:04,290 Validation result (greedy) at epoch 3, step 84000: bleu: 30.47, loss: 36337.9961, ppl: 4.6950, duration: 11.2926s\n", "2020-04-07 21:13:15,691 Epoch 3 Step: 84100 Batch Loss: 1.707605 Tokens per Sec: 20388, Lr: 0.000300\n", "2020-04-07 21:13:26,990 Epoch 3 Step: 84200 Batch Loss: 1.853164 Tokens per Sec: 20639, Lr: 0.000300\n", "2020-04-07 21:13:38,427 Epoch 3 Step: 84300 Batch Loss: 1.644427 Tokens per Sec: 20566, Lr: 0.000300\n", "2020-04-07 21:13:49,736 Epoch 3 Step: 84400 Batch Loss: 1.295440 Tokens per Sec: 20443, Lr: 0.000300\n", "2020-04-07 21:14:01,029 Epoch 3 Step: 84500 Batch Loss: 1.628160 Tokens per Sec: 20248, Lr: 0.000300\n", "2020-04-07 21:14:12,394 Epoch 3 Step: 84600 Batch Loss: 1.839830 Tokens per Sec: 20362, Lr: 0.000300\n", "2020-04-07 21:14:23,701 Epoch 3 Step: 84700 Batch Loss: 1.710175 Tokens per Sec: 20039, Lr: 0.000300\n", "2020-04-07 21:14:34,953 Epoch 3 Step: 84800 Batch Loss: 1.572069 Tokens per Sec: 20403, Lr: 0.000300\n", "2020-04-07 21:14:46,371 Epoch 3 Step: 84900 Batch Loss: 1.901179 Tokens per Sec: 19940, Lr: 0.000300\n", "2020-04-07 21:14:57,709 Epoch 3 Step: 85000 Batch Loss: 1.764309 Tokens per Sec: 20247, Lr: 0.000300\n", "2020-04-07 21:15:08,815 Hooray! New best validation result [ppl]!\n", "2020-04-07 21:15:08,815 Saving new checkpoint.\n", "2020-04-07 21:15:10,161 Example #0\n", "2020-04-07 21:15:10,161 \tSource: If you do , you will be choosing the best possible way of life .\n", "2020-04-07 21:15:10,162 \tReference: Edieke anamde ntre , ọwọrọ ke ememek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:15:10,162 \tHypothesis: Edieke anamde emi , afo eyemek mfọnn̄kan usụn̄ uwem .\n", "2020-04-07 21:15:10,162 Example #1\n", "2020-04-07 21:15:10,162 \tSource: They may even have been told as much by a clergyman .\n", "2020-04-07 21:15:10,162 \tReference: Akam ekeme ndidi se ọkwọrọ ederi eketịn̄de ọnọ mmọ edi oro .\n", "2020-04-07 21:15:10,163 \tHypothesis: Ekeme ndidi ọkwọrọ ederi ama akam ọdọhọ mmọ ke mmọ ẹma ẹkam ẹtịn̄ se mmọ ẹketịn̄de .\n", "2020-04-07 21:15:10,163 Example #2\n", "2020-04-07 21:15:10,163 \tSource: The same point is made at 2 Chronicles 5 : 9 .\n", "2020-04-07 21:15:10,163 \tReference: Ẹtịn̄ ukem ikọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:15:10,164 \tHypothesis: Ẹnam ukem n̄kpọ oro ke 2 Chronicles 5 : 9 .\n", "2020-04-07 21:15:10,164 Example #3\n", "2020-04-07 21:15:10,164 \tSource: 59 - 61 C.E . ) , and from there he finds ways to preach about the Kingdom and teach “ the things concerning the Lord Jesus Christ . ” ​ — Acts 28 : 30 , 31 .\n", "2020-04-07 21:15:10,164 \tReference: Ẹkọbi Paul ẹtem ke ufọk esie ke Rome ke isua iba ( ke n̄kpọ nte isua 59 esịm 61 E.N . ) , ndien enye oyom usụn̄ do ọkwọrọ Obio Ubọn̄ onyụn̄ ekpep mbon en̄wen “ mme n̄kpọ emi ẹban̄ade Ọbọn̄ Jesus Christ . ” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:15:10,164 \tHypothesis: 59 - 61 E.N . ) , ndien enye okụt usụn̄ ndikwọrọ Obio Ubọn̄ nnyụn̄ n̄kpep “ mme n̄kpọ ẹban̄ade Ọbọn̄ Jesus Christ . 
” — Utom 28 : 30 , 31 .\n", "2020-04-07 21:15:10,165 Validation result (greedy) at epoch 3, step 85000: bleu: 30.60, loss: 36262.0664, ppl: 4.6798, duration: 12.4547s\n", "2020-04-07 21:15:21,876 Epoch 3 Step: 85100 Batch Loss: 1.628079 Tokens per Sec: 20062, Lr: 0.000300\n", "2020-04-07 21:15:33,192 Epoch 3 Step: 85200 Batch Loss: 1.397948 Tokens per Sec: 20355, Lr: 0.000300\n", "2020-04-07 21:15:43,922 Epoch 3: total training loss 5714.55\n", "2020-04-07 21:15:43,922 Training ended after 3 epochs.\n", "2020-04-07 21:15:43,922 Best validation result (greedy) at step 85000: 4.68 ppl.\n", "2020-04-07 21:16:05,650 dev bleu: 31.00 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2020-04-07 21:16:05,655 Translations saved to: /content/drive/My Drive/masakhane/model-temp/00085000.hyps.dev\n", "2020-04-07 21:16:35,421 test bleu: 33.48 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2020-04-07 21:16:35,428 Translations saved to: /content/drive/My Drive/masakhane/model-temp/00085000.hyps.test\n" ] } ], "source": [ "# Train the model\n", "# You can press Ctrl-C to stop; then run the next cell to save your checkpoints!\n", "!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "MBoDS09JM807" }, "outputs": [], "source": [ "# Copy the created models from the temporary storage to permanent storage on Google Drive\n", "!cp -r \"/content/drive/My Drive/masakhane/model-temp/\"* \"$gdrive_path/models/${src}${tgt}_transformer/\"" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 187 }, "colab_type": "code", "id": "n94wlrCjVc17", "outputId": "1d2b2f10-e1cf-4a22-be10-4d4883f5a0d7" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Steps: 76000\tLoss: 36773.73438\tPPL: 4.78286\tbleu: 30.36739\tLR: 0.00030000\t\n", "Steps: 77000\tLoss: 36633.97656\tPPL: 4.75450\tbleu: 30.70387\tLR: 0.00030000\t\n", "Steps: 78000\tLoss: 36591.09375\tPPL: 4.74583\tbleu: 30.58997\tLR: 0.00030000\t*\n", "Steps: 79000\tLoss: 36588.30469\tPPL: 4.74527\tbleu: 30.31250\tLR: 0.00030000\t*\n", "Steps: 80000\tLoss: 36509.82422\tPPL: 4.72945\tbleu: 30.37741\tLR: 0.00030000\t*\n", "Steps: 81000\tLoss: 36469.53906\tPPL: 4.72134\tbleu: 30.77292\tLR: 0.00030000\t*\n", "Steps: 82000\tLoss: 36578.54688\tPPL: 4.74330\tbleu: 30.69861\tLR: 0.00030000\t\n", "Steps: 83000\tLoss: 36308.52734\tPPL: 4.68910\tbleu: 30.75077\tLR: 0.00030000\t*\n", "Steps: 84000\tLoss: 36337.99609\tPPL: 4.69499\tbleu: 30.46578\tLR: 0.00030000\t\n", "Steps: 85000\tLoss: 36262.06641\tPPL: 4.67984\tbleu: 30.59531\tLR: 0.00030000\t*\n" ] } ], "source": [ "# Print our validation scores (loss, perplexity and BLEU per validation step; * marks steps where a new best checkpoint was saved)\n", "! cat \"$gdrive_path/models/${src}${tgt}_transformer/validations.txt\"" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 68 }, "colab_type": "code", "id": "66WhRE9lIhoD", "outputId": "bac423c3-182d-41a8-8ca2-cd0bd74196dc" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2020-04-07 21:16:45,174 Hello! This is Joey-NMT.\n", "2020-04-07 21:17:10,964 dev bleu: 31.00 [Beam search decoding with beam size = 5 and alpha = 1.0]\n", "2020-04-07 21:17:40,602 test bleu: 33.48 [Beam search decoding with beam size = 5 and alpha = 1.0]\n" ] } ], "source": [ "# Test our model\n",
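 "# (Sketch, not run in this notebook) Besides the 'test' mode used below, JoeyNMT also provides a 'translate'\n", "# mode that reads sentences from stdin, which you can use on new text once testing looks reasonable.\n", "# 'my_sentences.$src' is a placeholder file with one source-language sentence per line, e.g.:\n", "# ! cd joeynmt; python3 -m joeynmt translate \"$gdrive_path/models/${src}${tgt}_transformer/config.yaml\" < my_sentences.$src\n", "! 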
cd joeynmt; python3 -m joeynmt test \"$gdrive_path/models/${src}${tgt}_transformer/config.yaml\"\n" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "include_colab_link": true, "name": "en_efi_jw300_notebook.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 1 }