{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ML_in_Finance-Backpropagation\n",
    "# Author: Matthew Dixon\n",
    "# Version: 1.0 (24.7.2019)\n",
    "# License: MIT\n",
    "# Email: matthew.dixon@iit.edu\n",
    "# Notes: tested on Mac OS X with Python 3.6.9 with the following packages\n",
    "# numpy=1.18.1, keras=2.3.1, tensorflow=2.0.0\n",
    "# Citation: Please cite the following reference if this notebook is used for research purposes:\n",
    "# Dixon M.F., Halperin I. and P. Bilokon, Machine Learning in Finance: From Theory to Practice, Springer Graduate textbook Series, 2020. \n",
    "# This notebook is courtesy of Justin Li, Imperial College (jcl514@ic.ac.uk) and demonstrates backpropagation."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Overview\n",
    "This notebooks demonstrate the back-propagation algorithm in detail and compares the results of an implementation with tensorflow. See Section 5.1 of Chapter 4 for further details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "_cell_guid": "b1076dfc-b9ad-4769-8c92-a6c4dae69d19",
    "_uuid": "8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
   },
   "outputs": [],
   "source": [
    "import numpy as np \n",
    "from numpy.linalg import norm\n",
    "\n",
    "import copy\n",
    "import os"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "_cell_guid": "79c7e3d0-c299-4dcb-8224-4455121ee9b0",
    "_uuid": "d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
   },
   "outputs": [],
   "source": [
    "def relu(x):\n",
    "    return x*(np.sign(x)+1.)/2."
   ]
  },
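  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# As a quick sanity check, this sign-based implementation of relu\n",
    "# agrees with the usual definition max(x, 0) elementwise.\n",
    "x = np.array([-2., 0., 3.])\n",
    "assert np.allclose(relu(x), np.maximum(x, 0.))"
   ]
  },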
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "_uuid": "ab7671c8b804e06579cfab4d74d8c6fec680eb0a"
   },
   "outputs": [],
   "source": [
    "def sigmoid(x):\n",
    "    return 1./(1.+np.exp(-x))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "_uuid": "9644cbf438ea83c5b6797eea3d2d8d61d32c673d"
   },
   "outputs": [],
   "source": [
    "def softmax(x):\n",
    "    return np.exp(x)/sum(np.exp(x))"
   ]
  },
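  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# As a sanity check, the softmax output should be a probability vector:\n",
    "# non-negative entries that sum to one.\n",
    "p = softmax(np.array([1.0, -2.0, 0.5]))\n",
    "assert np.all(p >= 0) and np.isclose(np.sum(p), 1.0)"
   ]
  },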
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "_uuid": "6aba0cf2b87a1a1f8c040f3f4df19a8d752501da"
   },
   "outputs": [],
   "source": [
    "def mynorm(Z):\n",
    "    return np.sqrt(np.mean(Z**2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_uuid": "ba0305a6cb5cd493c395c105bf2b7f01442893b3"
   },
   "source": [
    "Let us consider a feed-forward architecture with an input layer, $L-1$ hidden layers and one output layer, with $K$ units in the output layer. As a result, we have $L$ sets of weights and biases $(W^{(\\ell)}, \\mathbf{b}^{(\\ell)})$ for $\\ell=1,\\dots, L$, corresponding to the layer inputs $Z^{(\\ell-1)}$ and outputs $Z^{(\\ell)}$ for $\\ell=1,\\dots, L$. Recall that each layer is an activation of a semi-affine transformation, $I^{(\\ell)}(Z^{(\\ell-1)}):=W^{(L)}Z^{(\\ell-1)}+ b^{(L)}$. The corresponding activation functions are denoted as $\\sigma^{(\\ell)}$. The activation function for the output layer is a softmax function, $\\sigma_s(x)$. \n",
    "\n",
    "Here we use the cross-entropy as the loss function, which is defined as \n",
    "$$ \\mathcal{L}:= -\\sum_{k=1}^{K}Y_{k}\\log \\hat{Y}_{k}.$$\n",
    "\n",
    "The relationship between the layers, for $\\ell\\in\\{1,\\dots, L\\}$ are:\n",
    "\n",
    "\\begin{align*}\n",
    "\\hat{Y} (X) & = Z^{(L)}=\\sigma_s(I^{(L)}) \\in [0,1]^{K},\\\\\n",
    "Z^{(\\ell)} & = \\sigma^{(\\ell)} \\left ( I^{(\\ell)} \\right ), ~\\ell=1,\\dots,L-1,\\\\\n",
    "Z^{(0)} & = X.\\\\\n",
    "\\end{align*}\n",
    "\n",
    "\n",
    "The update rules for the weights and biases are\n",
    "\\begin{align*}\n",
    "    \\Delta W^{(\\ell)} &=  - \\gamma \\nabla_{W^{(\\ell)}}\\mathcal{L},\\\\\n",
    "    \\Delta \\mathbf{b}^{(\\ell)} &=  - \\gamma \\nabla_{\\mathbf{b}^{(\\ell)}}\\mathcal{L}.\n",
    "\\end{align*}\n",
    "We now begin the Back-Propagation.\n",
    "\n",
    "For the gradient of $\\mathcal{L}$ w.r.t. $W^{(L)}$ we have\n",
    "\\begin{align*}\n",
    "    \\frac{\\partial \\mathcal{L}}{\\partial w_{ij}^{(L)}} &= \\sum_{k=1}^{K}\\frac{\\partial \\mathcal{L}}{\\partial Z_{k}^{(L)}} \\frac{\\partial Z_{k}^{(L)}}{\\partial w_{ij}^{(L)}}\\\\\n",
    "    %%%%%%%%%%%\n",
    "    &= \\sum_{k=1}^{K}\\frac{\\partial \\mathcal{L}}{\\partial Z_{k}^{(L)}} \\sum_{m=1}^{K}\\frac{\\partial Z_{k}^{(L)}}{\\partial I_{m}^{(L)}} \\frac{\\partial I_{m}^{(L)}}{\\partial w_{ij}^{(L)}}\n",
    "\\end{align*}\n",
    "But \n",
    "\\begin{align*}\n",
    "    \\frac{\\partial \\mathcal{L}}{\\partial Z_{k}^{(L)}} &= -\\frac{Y_{k}}{Z_{k}^{(L)}}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    \\frac{\\partial Z_{k}^{(L)}}{\\partial I_{m}^{(L)}} &= \\frac{\\partial}{\\partial I_{m}^{(L)}}[\\sigma(I^{(L)})]_{k}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    &= \\frac{\\partial}{\\partial I_{m}^{(L)}} \\frac{\\exp[I_{k}^{(L)}]}{\\sum_{n=1}^{K}\\exp[I_{n}^{(L)}]}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    &= \\begin{cases} \n",
    "    -\\frac{\\exp[I_{k}^{(L)}]}{\\sum_{n=1}^{K}\\exp[I_{n}^{(L)}]} \\frac{\\exp[I_{m}^{(L)}]}{\\sum_{n=1}^{K}\\exp[I_{n}^{(L)}]} & \\text{if } k \\neq m \\\\\n",
    "    \\frac{\\exp[I_{k}^{(L)}]}{\\sum_{n=1}^{K}\\exp[I_{n}^{(L)}]} - \\frac{\\exp[I_{k}^{(L)}]}\n",
    "    {\\sum_{n=1}^{K}\\exp[I_{n}^{(L)}]} \\frac{\\exp[I_{m}^{(L)}]}{\\sum_{n=1}^{K}\\exp[I_{n}^{(L)}]} \n",
    "    & \\text{otherwise}\n",
    "    \\end{cases}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    &= \\begin{cases} \n",
    "    -\\sigma_{k}\\sigma_{m}& \\text{if } k \\neq m \\\\\n",
    "     \\sigma_k(1 - \\sigma_m) & \\text{otherwise}\n",
    "    \\end{cases}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    &= \\sigma_k(\\delta_{km} - \\sigma_m) \\quad \\text{where} \\, \\delta_{km} \\, \\text{is the Kronecker's Delta}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    \\frac{\\partial I_{m}^{(L)}}{\\partial w_{ij}^{(L)}} &= \\delta_{mi}Z_{j}^{(L-1)}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    \\implies \\frac{\\partial \\mathcal{L}}{\\partial w_{ij}^{(L)}} &= -\\sum_{k=1}^{K}\\frac{Y_{k}}{Z_{k}^{(L)}} \n",
    "    \\sum_{m=1}^{K} Z_{m}^{(L)}(\\delta_{km} - Z_{m}^{(L)}) \\delta_{mi}Z_{j}^{(L-1)}\\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    &= -Z_{j}^{(L-1)} \\sum_{k=1}^{K}Y_{k}  (\\delta_{ki} - Z_{i}^{(L)}) \\\\\n",
    "    %%%%%%%%%%%%%%\n",
    "    &= Z_{j}^{(L-1)} (Z_{i}^{(L)}-Y_{i})\n",
    "\\end{align*}\n",
    "Where we have used the fact that $\\sum_{k=1}^{K}Y_{k}=1$ in the last equality.\n",
    "\n",
    "Similarly for $\\mathbf{b}^{(L)}$, we have\n",
    "\\begin{align*}\n",
    "    \\frac{\\partial \\mathcal{L}}{\\partial b_{i}^{(L)}} &= \\sum_{k=1}^{K}\\frac{\\partial \\mathcal{L}}{\\partial Z_{k}^{(L)}} \\sum_{m=1}^{K}\\frac{\\partial Z_{k}^{(L)}}{\\partial I_{m}^{(L)}} \\frac{\\partial I_{m}^{(L)}}{\\partial b_{i}^{(L)}}\\\\\n",
    "    %%%%%%%%%%%%%%%%%%\n",
    "    &= Z_{i}^{(L)}-Y_{i}\n",
    "\\end{align*}\n",
    "It follows that\n",
    "\\begin{align*}\n",
    "    \\nabla_{\\mathbf{b}^{(L)}}\\mathcal{L} &= Z^{(L)}-Y\\\\\n",
    "    \\nabla_{W^{(L)}}\\mathcal{L} &= \\nabla_{\\mathbf{b}^{(L)}}\\mathcal{L} \\otimes {Z^{(L-1)}}\n",
    "\\end{align*}\n",
    "Where $\\otimes$ denotes the outer product."
   ]
  },
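  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical check of the result $\\nabla_{\\mathbf{b}^{(L)}}\\mathcal{L}=Z^{(L)}-Y$, we can compare the analytic gradient with central finite differences of the cross-entropy loss at randomly chosen weights and biases (the layer sizes below are arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rng = np.random.RandomState(0)\n",
    "W, b = rng.randn(3, 2), rng.randn(3)   # output-layer weights and biases\n",
    "Zprev = rng.rand(2)                    # output of the previous layer\n",
    "Y = np.array([1., 0., 0.])             # one-hot target\n",
    "loss = lambda bb: -Y @ np.log(softmax(W @ Zprev + bb))\n",
    "analytic = softmax(W @ Zprev + b) - Y  # Z^(L) - Y\n",
    "eps = 1e-6\n",
    "numeric = np.array([(loss(b + eps*e) - loss(b - eps*e))/(2*eps)\n",
    "                    for e in np.eye(3)])\n",
    "assert np.allclose(analytic, numeric, atol=1e-6)"
   ]
  },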
  {
   "cell_type": "markdown",
   "metadata": {
    "_uuid": "baff8933f9f9ab7421c873f385975555d58efcec"
   },
   "source": [
    "For the gradient of $\\mathcal{L}$ w.r.t. $W^{(L-1)}$ we have\n",
    "\\begin{align*}\n",
    "    \\frac{\\partial \\mathcal{L}}{\\partial w_{ij}^{(L-1)}} &= \\sum_{k=1}^{K}\\frac{\\partial L}{\\partial Z_{k}^{(L)}} \\frac{\\partial Z_{k}^{(L)}}{\\partial w_{ij}^{(L-1)}}\\\\\n",
    "    %%%%%%%%%%%\n",
    "    &= \\sum_{k=1}^{K}\\frac{\\partial \\mathcal{L}}{\\partial Z_{k}^{(L)}} \\sum_{m=1}^{K}\\frac{\\partial Z_{k}^{(L)}}{\\partial I_{m}^{(L)}} \\sum_{n=1}^{n^{(L-1)}} \\frac{\\partial I_{m}^{(L)}}{\\partial Z_{n}^{(L-1)}} \\sum_{p=1}^{n^{(L-1)}} \\frac{\\partial Z_{n}^{(L-1)}}{\\partial I_{p}^{(L-1)}} \\frac{\\partial I_{p}^{(L-1)}}{\\partial w_{ij}^{(L-1)}}\n",
    "\\end{align*}\n",
    "If we assume that $\\sigma^{(\\ell)}(x)=\\text{sigmoid}(x), ~\\ell \\in \\{1,\\dots, L-1\\}$ then \n",
    "\\begin{align*}\n",
    "    \\frac{\\partial I_{m}^{(L)}}{\\partial Z_{n}^{(L-1)}} &= w_{mn}^{(L)}\\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    \\frac{\\partial Z_{n}^{(L-1)}}{\\partial I_{p}^{(L-1)}} &= \\frac{\\partial}{\\partial I_{p}^{(L-1)}}\\bigg(\\frac{1}{1+\\exp(-I_{n}^{(L-1)})}\\bigg)\\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    &= \\frac{1}{1+\\exp(-I_{n}^{(L-1)})} \\frac{\\exp(-I_{n}^{(L-1)})}{1+\\exp(-I_{n}^{(L-1)})} \\, \\delta_{np} \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    &= Z_{n}^{(L-1)} (1-Z_{n}^{(L-1)}) \\, \\delta_{np} = \\sigma^{(L-1)}_n(1-\\sigma^{(L-1)}_n)\\delta_{np} \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    \\frac{\\partial I_{p}^{(L-1)}}{\\partial w_{ij}^{(L-1)}} &= \\delta_{pi} Z_{j}^{(L-2)} \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    \\implies \\frac{\\partial L}{\\partial w_{ij}^{(L)}} &= -\\sum_{k=1}^{K}\\frac{Y_{k}}{Z_{k}^{(L)}} \\sum_{m=1}^{K}Z_{k}^{(L)}(\\delta_{km} - Z_{m}^{(L)}) \\sum_{n=1}^{n^{(L-1)}} w_{mn}^{(L)} \\sum_{p=1}^{n^{(L-1)}} Z_{n}^{(L-1)} (1-Z_{n}^{(L-1)}) \\, \\delta_{np} \\delta_{pi} Z_{j}^{(L-2)} \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    &= -\\sum_{k=1}^{K}Y_{k} \\sum_{m=1}^{K}(\\delta_{km} - Z_{m}^{(L)}) \\sum_{n=1}^{n^{(L-1)}} w_{mn}^{(L)} Z_{n}^{(L-1)} (1-Z_{n}^{(L-1)}) \\, \\delta_{ni} Z_{j}^{(L-2)} \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    &= -\\sum_{k=1}^{K}Y_{k} \\sum_{m=1}^{K}(\\delta_{km} - Z_{m}^{(L)}) w_{mi}^{(L)} Z_{i}^{(L-2)} (1-Z_{i}^{(L-1)}) Z_{j}^{(L-2)} \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    &= -Z_{j}^{(L-2)}Z_{i}^{(L-1)}(1-Z_{i}^{(L-1)}) \\sum_{m=1}^{K} w_{mi}^{(L)} \\sum_{k=1}^{K}(\\delta_{km}Y_{k} - Z_{m}^{(L)}Y_{k}) \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    &= Z_{j}^{(L-2)}Z_{i}^{(L-1)} (1-Z_{i}^{(L-1)}) \\sum_{m=1}^{K} w_{mi}^{(L)} (Z_{m}^{(L)} - Y_{m}) \\\\\n",
    "    %%%%%%%%%%%%%%%%%\n",
    "    &= Z_{j}^{(L-2)}Z_{i}^{(L-1)} (1-Z_{i}^{(L-1)}) (Z^{(L)} - Y)^{T} \\mathbf{w}_{,i}^{(L)} \\\\\n",
    "\\end{align*}\n",
    "Similarly we have\n",
    "$$ \\frac{\\partial \\mathcal{L}}{\\partial b_{i}^{(L-1)}} = Z_{i}^{(L-1)} (1-Z_{i}^{(L-1)}) (Z^{(L)} - Y)^{T} \\mathbf{w}_{,i}^{(L)}. $$\n",
    "It follows that we can define the following recursion relation for the loss gradient:\n",
    "\n",
    "\\begin{align*}\n",
    "    \\nabla_{b^{(L-1)}}\\mathcal{L} &= Z^{(L-1)} \\circ (\\mathbf{1}-Z^{(L-1)}) \\circ \n",
    "    ({W^{(L)}}^{T} \\nabla_{b^{(L)}}\\mathcal{L}) \\\\\n",
    "    \\nabla_{W^{(L-1)}}\\mathcal{L} &= \\nabla_{b^{(L-1)}}\\mathcal{L} \\otimes Z^{(L-2)}\\\\\n",
    "    & = Z^{(L-1)} \\circ (\\mathbf{1}-Z^{(L-1)}) \\circ \n",
    "    ({W^{(L)}}^{T} \\nabla_{W^{(L)}}\\mathcal{L})\n",
    "\\end{align*}\n",
    "\n",
    "Where $\\circ$ denotes the Hadamard Product (elementwise multiplication). This recursion relation generalizes for all layers. To see this, let the back-propagation error $\\delta^{(\\ell)}:=\\nabla_{b^{(\\ell)}}\\mathcal{L}$, and since\n",
    "\n",
    "\\begin{align*}\n",
    "\\left[\\frac{\\partial \\sigma^{(\\ell)}}{\\partial I^{(\\ell)}}\\right]_{ij}&=\\frac{\\partial \\sigma_i^{(\\ell)}}{\\partial I_j^{(\\ell)}}\\\\\n",
    "&=\\sigma_i^{(\\ell)}(1-\\sigma_i^{(\\ell)})\\delta_{ij}\\\\\n",
    "\\end{align*}\n",
    "Equivalently in matrix-vector form\n",
    "$$\\nabla_{I^{(\\ell)}} \\sigma^{(\\ell)}=\\text{diag}(\\sigma^{(\\ell)} \\circ (\\mathbf{1}-\\sigma^{(\\ell)})).$$\n",
    "\n",
    "We can write, in general, for any choice of activation function for the hidden layer,\n",
    "\n",
    "$$ \\delta^{(\\ell)}=(\\nabla_{I^{(\\ell)}} \\sigma^{(\\ell)})(W^{(\\ell+1)})^T\\delta^{(\\ell+1)}.$$\n",
    "and\n",
    "\n",
    "$$\\nabla_{W^{(\\ell)}}\\mathcal{L} = \\delta^{(\\ell)} \\otimes Z^{(\\ell-1)}.$$"
   ]
  },
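  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Similarly, the recursion $\\delta^{(\\ell)}=Z^{(\\ell)} \\circ (\\mathbf{1}-Z^{(\\ell)}) \\circ ({W^{(\\ell+1)}}^{T}\\delta^{(\\ell+1)})$ for a sigmoid hidden layer can be checked numerically on a small random network, again by comparing with finite differences of the loss with respect to the hidden-layer biases:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rng = np.random.RandomState(1)\n",
    "W1, b1 = rng.randn(2, 3), rng.randn(2)  # sigmoid hidden layer\n",
    "W2, b2 = rng.randn(3, 2), rng.randn(3)  # softmax output layer\n",
    "X = rng.rand(3)\n",
    "Y = np.array([0., 1., 0.])\n",
    "\n",
    "def loss(b1v):\n",
    "    Z1 = sigmoid(W1 @ X + b1v)\n",
    "    return -Y @ np.log(softmax(W2 @ Z1 + b2))\n",
    "\n",
    "Z1 = sigmoid(W1 @ X + b1)\n",
    "delta2 = softmax(W2 @ Z1 + b2) - Y        # output-layer error\n",
    "delta1 = Z1 * (1 - Z1) * (W2.T @ delta2)  # back-propagated error\n",
    "eps = 1e-6\n",
    "numeric = np.array([(loss(b1 + eps*e) - loss(b1 - eps*e))/(2*eps)\n",
    "                    for e in np.eye(2)])\n",
    "assert np.allclose(delta1, numeric, atol=1e-6)"
   ]
  },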
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Backpropagation Example\n",
    "Here we define a three layer feed-forward network as in the Example given in Chapter 4.\n",
    "On each pass through the training loop, the training input is fed forward through the network to calculate the loss, then the gradient of the loss with respect to each of the weights is calculated and the weights updated for the next pass through the loop.\n",
    "\n",
    "Below, we will compare this backpropagation implementation's results with Keras"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "_uuid": "d5e7c8262ea14a5f1e3678534442f0c6292486f2"
   },
   "outputs": [],
   "source": [
    "def myANN(Y, Xtrain, Xpred, W01, W02, W03, b01, b02, b03):\n",
    "    # Initialization of Weights and Biases\n",
    "    W1 = copy.copy(W01)\n",
    "    W2 = copy.copy(W02)\n",
    "    W3 = copy.copy(W03)\n",
    "    b1 = copy.copy(b01)\n",
    "    b2 = copy.copy(b02)\n",
    "    b3 = copy.copy(b03)\n",
    "    \n",
    "    # Initialize ad hoc variables\n",
    "    k = 1\n",
    "    change = 999\n",
    "    \n",
    "    # Begin the training loop\n",
    "    while (change > 0.001 and k < 201):\n",
    "        print(\"Iteration\", k)\n",
    "        \n",
    "        ## Begin Feedforward (assume learning rate is one)\n",
    "        # Hidden Layer 1\n",
    "        Z1 = sigmoid(W1 @ Xtrain + b1)\n",
    "        # Hidden Layer 2\n",
    "        Z2 = sigmoid(W2 @ Z1 + b2)\n",
    "        # Output Layer\n",
    "        Yhat = softmax(W3 @ Z2 + b3)\n",
    "        # Find cross-entropy loss\n",
    "        loss = -Y @ np.log(Yhat)\n",
    "        print(\"Current Loss:\",loss)\n",
    "        \n",
    "        ## Find gradient of loss with respect to the weights\n",
    "        # Output Later\n",
    "        dLdb3 = Yhat - Y \n",
    "        dLdW3 = np.outer(dLdb3, Z2)\n",
    "        # Hidden Layer 2\n",
    "        dLdb2 = (W3.T @ (dLdb3)) * Z2 * (1-Z2)\n",
    "        dLdW2 = np.outer(dLdb2,Z1)\n",
    "        # Hidden Layer 1\n",
    "        dLdb1 = (W2.T @ (dLdb2)) * Z1 * (1-Z1)\n",
    "        dLdW1 = np.outer(dLdb1, Xtrain)\n",
    "        \n",
    "        ## Update Weights by Back Propagation\n",
    "        # Output Layer\n",
    "        b3 -= dLdb3 # (learning rate is one)\n",
    "        W3 -= dLdW3\n",
    "        # Hidden Layer 2\n",
    "        b2 -= dLdb2\n",
    "        W2 -= dLdW2\n",
    "        # Hidden Layer 1\n",
    "        b1 -= dLdb1\n",
    "        W1 -= dLdW1\n",
    "        \n",
    "        change = norm(dLdb1)+norm(dLdb2)+norm(dLdb3)+norm(dLdW1)+norm(dLdW2)+norm(dLdW3)\n",
    "        k += 1\n",
    "        \n",
    "    Z1pred = W1 @ Xpred + b1\n",
    "    Z2pred = W2 @ sigmoid(Z1pred) + b2\n",
    "    Z3pred = W3 @ sigmoid(Z2pred) + b3\n",
    "    Ypred = softmax(Z3pred)\n",
    "    print(\"\")\n",
    "    print(\"Summary\")\n",
    "    print(\"Target Y \\n\", Y)\n",
    "    print(\"Fitted Ytrain \\n\", Yhat)\n",
    "    print(\"Xpred\\n\", Xpred)\n",
    "    print(\"Fitted Ypred \\n\", Ypred)\n",
    "    print(\"Weight Matrix 1 \\n\", W1)\n",
    "    print(\"Bias Vector 1 \\n\", b1)\n",
    "    print(\"Weight Matrix 2 \\n\", W2)\n",
    "    print(\"Bias Vector 2 \\n\", b2)\n",
    "    print(\"Weight Matrix 3 \\n\", W3)\n",
    "    print(\"Bias Vector 3 \\n\", b3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Defining the initial weights of the network prior to training:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "_uuid": "5c41fd636007d1448a7a3d9e8481a417e771faec"
   },
   "outputs": [],
   "source": [
    "W0_1 = np.array([[0.1,0.3,0.7], [0.9,0.4,0.4]])\n",
    "b_1 = np.array([1.,1.])\n",
    "\n",
    "W0_2 = np.array([[0.4,0.3], [0.7,0.2]])\n",
    "b_2 = np.array([1.,1.])\n",
    "\n",
    "W0_3 = np.array([[0.5,0.6], [0.6,0.7], [0.3,0.2]])\n",
    "b_3 = np.array([1.,1.,1.]) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Defining the training input and the desired output of the model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train = np.array([0.1,0.7,0.3])\n",
    "YY = np.array([1.,0.,0.])\n",
    "X_pred = X_train"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally we can run the input through the model, updating the weights on each iteration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "_uuid": "9807df2b46c07bcd1f58c6e19e75b47294911283",
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iteration 1\n",
      "Current Loss: 1.0245882244632154\n",
      "Iteration 2\n",
      "Current Loss: 0.15485530792507038\n",
      "Iteration 3\n",
      "Current Loss: 0.09392421885330306\n",
      "Iteration 4\n",
      "Current Loss: 0.06800390179198364\n",
      "Iteration 5\n",
      "Current Loss: 0.05342702439392962\n",
      "Iteration 6\n",
      "Current Loss: 0.04403516135363163\n",
      "Iteration 7\n",
      "Current Loss: 0.037464385843672066\n",
      "Iteration 8\n",
      "Current Loss: 0.03260386527878382\n",
      "Iteration 9\n",
      "Current Loss: 0.02886032281105674\n",
      "Iteration 10\n",
      "Current Loss: 0.025887274175115597\n",
      "Iteration 11\n",
      "Current Loss: 0.023468486741688343\n",
      "Iteration 12\n",
      "Current Loss: 0.021461901907750262\n",
      "Iteration 13\n",
      "Current Loss: 0.019770281829020363\n",
      "Iteration 14\n",
      "Current Loss: 0.01832479491681583\n",
      "Iteration 15\n",
      "Current Loss: 0.017075333031958018\n",
      "Iteration 16\n",
      "Current Loss: 0.0159845417596331\n",
      "Iteration 17\n",
      "Current Loss: 0.015024000543215876\n",
      "Iteration 18\n",
      "Current Loss: 0.014171699633555721\n",
      "Iteration 19\n",
      "Current Loss: 0.013410327234309708\n",
      "Iteration 20\n",
      "Current Loss: 0.012726078464642231\n",
      "Iteration 21\n",
      "Current Loss: 0.012107809483423326\n",
      "Iteration 22\n",
      "Current Loss: 0.011546425364276336\n",
      "Iteration 23\n",
      "Current Loss: 0.011034429625453819\n",
      "Iteration 24\n",
      "Current Loss: 0.010565587675554136\n",
      "Iteration 25\n",
      "Current Loss: 0.010134671905499095\n",
      "Iteration 26\n",
      "Current Loss: 0.009737266203781016\n",
      "Iteration 27\n",
      "Current Loss: 0.009369614329770404\n",
      "Iteration 28\n",
      "Current Loss: 0.009028501073652552\n",
      "Iteration 29\n",
      "Current Loss: 0.008711158215957316\n",
      "Iteration 30\n",
      "Current Loss: 0.008415189449466124\n",
      "Iteration 31\n",
      "Current Loss: 0.008138509946039773\n",
      "Iteration 32\n",
      "Current Loss: 0.007879297339396041\n",
      "Iteration 33\n",
      "Current Loss: 0.007635951683939785\n",
      "Iteration 34\n",
      "Current Loss: 0.0074070625282576415\n",
      "Iteration 35\n",
      "Current Loss: 0.007191381670469707\n",
      "Iteration 36\n",
      "Current Loss: 0.0069878004832728035\n",
      "Iteration 37\n",
      "Current Loss: 0.006795330938602875\n",
      "Iteration 38\n",
      "Current Loss: 0.006613089646199968\n",
      "Iteration 39\n",
      "Current Loss: 0.00644028436190301\n",
      "Iteration 40\n",
      "Current Loss: 0.00627620253098548\n",
      "Iteration 41\n",
      "Current Loss: 0.006120201517150355\n",
      "Iteration 42\n",
      "Current Loss: 0.0059717002347349744\n",
      "Iteration 43\n",
      "Current Loss: 0.005830171954493363\n",
      "Iteration 44\n",
      "Current Loss: 0.005695138095313766\n",
      "Iteration 45\n",
      "Current Loss: 0.005566162847736158\n",
      "Iteration 46\n",
      "Current Loss: 0.005442848502093459\n",
      "Iteration 47\n",
      "Current Loss: 0.005324831375835792\n",
      "Iteration 48\n",
      "Current Loss: 0.005211778252259988\n",
      "Iteration 49\n",
      "Current Loss: 0.0051033832572613\n",
      "Iteration 50\n",
      "Current Loss: 0.004999365112517402\n",
      "Iteration 51\n",
      "Current Loss: 0.0048994647132164495\n",
      "Iteration 52\n",
      "Current Loss: 0.004803442986460101\n",
      "Iteration 53\n",
      "Current Loss: 0.00471107899311831\n",
      "Iteration 54\n",
      "Current Loss: 0.00462216824145271\n",
      "Iteration 55\n",
      "Current Loss: 0.00453652118544789\n",
      "Iteration 56\n",
      "Current Loss: 0.004453961884675663\n",
      "Iteration 57\n",
      "Current Loss: 0.004374326805777462\n",
      "Iteration 58\n",
      "Current Loss: 0.0042974637484118755\n",
      "Iteration 59\n",
      "Current Loss: 0.004223230880841465\n",
      "Iteration 60\n",
      "Current Loss: 0.004151495872327611\n",
      "Iteration 61\n",
      "Current Loss: 0.00408213511117988\n",
      "Iteration 62\n",
      "Current Loss: 0.004015032998756381\n",
      "Iteration 63\n",
      "Current Loss: 0.003950081310948609\n",
      "Iteration 64\n",
      "Current Loss: 0.0038871786197421337\n",
      "Iteration 65\n",
      "Current Loss: 0.0038262297683633838\n",
      "Iteration 66\n",
      "Current Loss: 0.003767145394309078\n",
      "Iteration 67\n",
      "Current Loss: 0.0037098414952397496\n",
      "Iteration 68\n",
      "Current Loss: 0.0036542390333093814\n",
      "Iteration 69\n",
      "Current Loss: 0.003600263574021281\n",
      "Iteration 70\n",
      "Current Loss: 0.003547844956144895\n",
      "Iteration 71\n",
      "Current Loss: 0.0034969169896222414\n",
      "Iteration 72\n",
      "Current Loss: 0.003447417178733045\n",
      "Iteration 73\n",
      "Current Loss: 0.0033992864680910564\n",
      "Iteration 74\n",
      "Current Loss: 0.003352469009304041\n",
      "Iteration 75\n",
      "Current Loss: 0.0033069119463623483\n",
      "Iteration 76\n",
      "Current Loss: 0.003262565218026462\n",
      "Iteration 77\n",
      "Current Loss: 0.003219381375664246\n",
      "Iteration 78\n",
      "Current Loss: 0.0031773154151463557\n",
      "Iteration 79\n",
      "Current Loss: 0.00313632462154941\n",
      "Iteration 80\n",
      "Current Loss: 0.003096368425546633\n",
      "Iteration 81\n",
      "Current Loss: 0.0030574082704677512\n",
      "Iteration 82\n",
      "Current Loss: 0.0030194074891197573\n",
      "Iteration 83\n",
      "Current Loss: 0.0029823311895385484\n",
      "Iteration 84\n",
      "Current Loss: 0.0029461461489299653\n",
      "Iteration 85\n",
      "Current Loss: 0.0029108207151179867\n",
      "Iteration 86\n",
      "Current Loss: 0.0028763247148933494\n",
      "Iteration 87\n",
      "Current Loss: 0.002842629368699903\n",
      "Iteration 88\n",
      "Current Loss: 0.0028097072111591057\n",
      "Iteration 89\n",
      "Current Loss: 0.0027775320169677402\n",
      "Iteration 90\n",
      "Current Loss: 0.0027460787317498763\n",
      "Iteration 91\n",
      "Current Loss: 0.002715323407485242\n",
      "Iteration 92\n",
      "Current Loss: 0.002685243142160879\n",
      "Iteration 93\n",
      "Current Loss: 0.0026558160233255356\n",
      "Iteration 94\n",
      "Current Loss: 0.002627021075262604\n",
      "Iteration 95\n",
      "Current Loss: 0.002598838209507203\n",
      "Iteration 96\n",
      "Current Loss: 0.002571248178465404\n",
      "Iteration 97\n",
      "Current Loss: 0.002544232531912161\n",
      "Iteration 98\n",
      "Current Loss: 0.002517773576160802\n",
      "Iteration 99\n",
      "Current Loss: 0.002491854335713814\n",
      "Iteration 100\n",
      "Current Loss: 0.0024664585172226904\n",
      "Iteration 101\n",
      "Current Loss: 0.002441570475594441\n",
      "Iteration 102\n",
      "Current Loss: 0.0024171751820989333\n",
      "Iteration 103\n",
      "Current Loss: 0.002393258194338355\n",
      "Iteration 104\n",
      "Current Loss: 0.0023698056279554377\n",
      "Iteration 105\n",
      "Current Loss: 0.002346804129964487\n",
      "Iteration 106\n",
      "Current Loss: 0.002324240853592821\n",
      "Iteration 107\n",
      "Current Loss: 0.002302103434542178\n",
      "Iteration 108\n",
      "Current Loss: 0.002280379968568292\n",
      "Iteration 109\n",
      "Current Loss: 0.0022590589903020023\n",
      "Iteration 110\n",
      "Current Loss: 0.002238129453227335\n",
      "Iteration 111\n",
      "Current Loss: 0.0022175807107444856\n",
      "Iteration 112\n",
      "Current Loss: 0.0021974024982524694\n",
      "Iteration 113\n",
      "Current Loss: 0.002177584916184018\n",
      "Iteration 114\n",
      "Current Loss: 0.0021581184139373576\n",
      "Iteration 115\n",
      "Current Loss: 0.002138993774648413\n",
      "Iteration 116\n",
      "Current Loss: 0.0021202021007520355\n",
      "Iteration 117\n",
      "Current Loss: 0.00210173480028876\n",
      "Iteration 118\n",
      "Current Loss: 0.0020835835739061873\n",
      "Iteration 119\n",
      "Current Loss: 0.002065740402521113\n",
      "Iteration 120\n",
      "Current Loss: 0.002048197535597659\n",
      "Iteration 121\n",
      "Current Loss: 0.002030947480011244\n",
      "Iteration 122\n",
      "Current Loss: 0.0020139829894584697\n",
      "Iteration 123\n",
      "Current Loss: 0.0019972970543886896\n",
      "Iteration 124\n",
      "Current Loss: 0.001980882892421701\n",
      "Iteration 125\n",
      "Current Loss: 0.001964733939227919\n",
      "Iteration 126\n",
      "Current Loss: 0.0019488438398435075\n",
      "Iteration 127\n",
      "Current Loss: 0.001933206440397736\n",
      "Iteration 128\n",
      "Current Loss: 0.0019178157802291818\n",
      "Iteration 129\n",
      "Current Loss: 0.001902666084369175\n",
      "Iteration 130\n",
      "Current Loss: 0.0018877517563733507\n",
      "Iteration 131\n",
      "Current Loss: 0.0018730673714834013\n",
      "Iteration 132\n",
      "Current Loss: 0.0018586076700990112\n",
      "Iteration 133\n",
      "Current Loss: 0.001844367551547314\n",
      "Iteration 134\n",
      "Current Loss: 0.001830342068129313\n",
      "Iteration 135\n",
      "Current Loss: 0.0018165264194347297\n",
      "Iteration 136\n",
      "Current Loss: 0.0018029159469067416\n",
      "Iteration 137\n",
      "Current Loss: 0.0017895061286454063\n",
      "Iteration 138\n",
      "Current Loss: 0.0017762925744375915\n",
      "Iteration 139\n",
      "Current Loss: 0.001763271021004328\n",
      "Iteration 140\n",
      "Current Loss: 0.0017504373274487383\n",
      "Iteration 141\n",
      "Current Loss: 0.0017377874709015937\n",
      "Iteration 142\n",
      "Current Loss: 0.0017253175423493153\n",
      "Iteration 143\n",
      "Current Loss: 0.0017130237426385964\n",
      "Iteration 144\n",
      "Current Loss: 0.0017009023786474721\n",
      "Iteration 145\n",
      "Current Loss: 0.0016889498596160144\n",
      "Iteration 146\n",
      "Current Loss: 0.0016771626936267106\n",
      "Iteration 147\n",
      "Current Loss: 0.0016655374842323772\n",
      "Iteration 148\n",
      "Current Loss: 0.0016540709272153357\n",
      "Iteration 149\n",
      "Current Loss: 0.0016427598074842667\n",
      "Iteration 150\n",
      "Current Loss: 0.001631600996090364\n",
      "Iteration 151\n",
      "Current Loss: 0.001620591447365091\n",
      "Iteration 152\n",
      "Current Loss: 0.0016097281961701734\n",
      "Iteration 153\n",
      "Current Loss: 0.0015990083552566876\n",
      "Iteration 154\n",
      "Current Loss: 0.001588429112725769\n",
      "Iteration 155\n",
      "Current Loss: 0.0015779877295900278\n",
      "Iteration 156\n",
      "Current Loss: 0.0015676815374272\n",
      "Iteration 157\n",
      "Current Loss: 0.0015575079361249001\n",
      "Iteration 158\n",
      "Current Loss: 0.0015474643917100068\n",
      "Iteration 159\n",
      "Current Loss: 0.001537548434262775\n",
      "Iteration 160\n",
      "Current Loss: 0.0015277576559055382\n",
      "Iteration 161\n",
      "Current Loss: 0.0015180897088707657\n",
      "Iteration 162\n",
      "Current Loss: 0.0015085423036367846\n",
      "Iteration 163\n",
      "Current Loss: 0.001499113207136711\n",
      "Iteration 164\n",
      "Current Loss: 0.0014898002410292355\n",
      "Iteration 165\n",
      "Current Loss: 0.001480601280033919\n",
      "Iteration 166\n",
      "Current Loss: 0.0014715142503285402\n",
      "Iteration 167\n",
      "Current Loss: 0.001462537128000921\n",
      "Iteration 168\n",
      "Current Loss: 0.0014536679375592241\n",
      "Iteration 169\n",
      "Current Loss: 0.0014449047504928164\n",
      "Iteration 170\n",
      "Current Loss: 0.0014362456838866904\n",
      "Iteration 171\n",
      "Current Loss: 0.0014276888990818735\n",
      "Iteration 172\n",
      "Current Loss: 0.0014192326003850439\n",
      "Iteration 173\n",
      "Current Loss: 0.001410875033821779\n",
      "Iteration 174\n",
      "Current Loss: 0.0014026144859331027\n",
      "Iteration 175\n",
      "Current Loss: 0.0013944492826132057\n",
      "Iteration 176\n",
      "Current Loss: 0.0013863777879884443\n",
      "Iteration 177\n",
      "Current Loss: 0.0013783984033319426\n",
      "Iteration 178\n",
      "Current Loss: 0.0013705095660166817\n",
      "Iteration 179\n",
      "Current Loss: 0.001362709748503731\n",
      "Iteration 180\n",
      "Current Loss: 0.0013549974573630647\n",
      "Iteration 181\n",
      "Current Loss: 0.0013473712323287318\n",
      "Iteration 182\n",
      "Current Loss: 0.0013398296453834848\n",
      "Iteration 183\n",
      "Current Loss: 0.0013323712998748638\n",
      "Iteration 184\n",
      "Current Loss: 0.0013249948296592828\n",
      "Iteration 185\n",
      "Current Loss: 0.0013176988982745602\n",
      "Iteration 186\n",
      "Current Loss: 0.0013104821981389993\n",
      "Iteration 187\n",
      "Current Loss: 0.0013033434497765686\n",
      "Iteration 188\n",
      "Current Loss: 0.0012962814010651779\n",
      "Iteration 189\n",
      "Current Loss: 0.001289294826512714\n",
      "Iteration 190\n",
      "Current Loss: 0.0012823825265512756\n",
      "Iteration 191\n",
      "Current Loss: 0.0012755433268569337\n",
      "Iteration 192\n",
      "Current Loss: 0.0012687760776901333\n",
      "Iteration 193\n",
      "Current Loss: 0.0012620796532560573\n",
      "Iteration 194\n",
      "Current Loss: 0.0012554529510845087\n",
      "Iteration 195\n",
      "Current Loss: 0.0012488948914308661\n",
      "Iteration 196\n",
      "Current Loss: 0.0012424044166928826\n",
      "Iteration 197\n",
      "Current Loss: 0.0012359804908474389\n",
      "Iteration 198\n",
      "Current Loss: 0.0012296220989031369\n",
      "Iteration 199\n",
      "Current Loss: 0.0012233282463698413\n",
      "Iteration 200\n",
      "Current Loss: 0.0012170979587443902\n",
      "\n",
      "Summary\n",
      "Target Y \n",
      " [1. 0. 0.]\n",
      "Fitted Ytrain \n",
      " [9.98783642e-01 6.22204934e-04 5.94152662e-04]\n",
      "Xpred\n",
      " [0.1 0.7 0.3]\n",
      "Fitted Ypred \n",
      " [9.98789803e-01 6.19037496e-04 5.91159904e-04]\n",
      "Weight Matrix 1 \n",
      " [[0.10459032 0.33213226 0.71377097]\n",
      " [0.90239788 0.41678516 0.40719364]]\n",
      "Bias Vector 1 \n",
      " [1.04590323 1.02397881]\n",
      "Weight Matrix 2 \n",
      " [[0.59220562 0.49371315]\n",
      " [0.88894064 0.39044529]]\n",
      "Bias Vector 2 \n",
      " [1.23662517 1.23265924]\n",
      "Weight Matrix 3 \n",
      " [[ 2.10529038  2.24330927]\n",
      " [-0.31430366 -0.23618653]\n",
      " [-0.39098672 -0.50712274]]\n",
      "Bias Vector 3 \n",
      " [ 2.89373523 -0.08099052  0.18725529]\n"
     ]
    }
   ],
   "source": [
    "myANN(YY, X_train, X_pred, W0_1, W0_2, W0_3, b_1, b_2, b_3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Implementing the Model with Keras"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To recreate our example neural network with Keras, we first import the components needed to build the model. Keras provides a rich suite of model architectures, layers, activation functions and other building blocks for deep learning models.\n",
    "\n",
    "As we are creating a typical sequential network with three densely connected layers, we only need the `Sequential` model and `Dense` layer classes.\n",
    "\n",
    "For training, we will instantiate a stochastic gradient descent optimiser with its learning rate set to 1, matching our implementation above. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "_uuid": "3452568af4e2b0eef09b05faf226e7ca100d4340",
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "from keras.models import Sequential\n",
    "from keras.layers import Dense\n",
    "from keras.optimizers import SGD"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To create the model, we add the layers in the order they are applied to the input, specifying their activation functions and initialising their weights to the values defined above. Note that the weight matrices are transposed to match Keras' convention:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = Sequential()\n",
    "model.add(Dense(2, input_dim=3, activation='sigmoid', weights = [W0_1.T, b_1]))\n",
    "model.add(Dense(2, activation='sigmoid', weights = [W0_2.T, b_2]))\n",
    "model.add(Dense(3, activation='softmax', weights = [W0_3.T, b_3]))"
   ]
  },
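  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Keras stores each `Dense` layer's kernel with shape `(input_dim, units)` and computes `activation(x @ kernel + bias)`, whereas our implementation above stores each weight matrix with shape `(units, input_dim)` and computes `W @ x + b`. The two conventions agree when the kernel is the transpose of `W`. A minimal numpy sketch, using the first layer's initial weights `W0_1` and an input with `X_pred`'s values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: the manual convention W @ x and the Keras convention x @ W.T agree\n",
    "W = np.array([[0.1, 0.3, 0.7],\n",
    "              [0.9, 0.4, 0.4]])  # W0_1 in the (units, input_dim) convention\n",
    "b = np.array([1., 1.])\n",
    "x = np.array([0.1, 0.7, 0.3])\n",
    "print(np.allclose(W @ x + b, x @ W.T + b))  # True"
   ]
  },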
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can compile the model. Compilation is where its learning strategy is defined. \n",
    "\n",
    "We provide Keras with the objective function (`loss`) to minimise during training and the optimisation algorithm that will carry out the minimisation. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "_uuid": "c3022e1a7b78fbe0ca1bf116e3e96882b4f2a7c9"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[array([[0.1, 0.9],\n",
       "        [0.3, 0.4],\n",
       "        [0.7, 0.4]], dtype=float32),\n",
       " array([1., 1.], dtype=float32),\n",
       " array([[0.4, 0.7],\n",
       "        [0.3, 0.2]], dtype=float32),\n",
       " array([1., 1.], dtype=float32),\n",
       " array([[0.5, 0.6, 0.3],\n",
       "        [0.6, 0.7, 0.2]], dtype=float32),\n",
       " array([1., 1., 1.], dtype=float32)]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sgd = SGD(lr=1)\n",
    "model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['categorical_crossentropy'])\n",
    "model.get_weights()"
   ]
  },
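  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the target is one-hot, the categorical cross-entropy `-sum(y * log(p))` reduces to the negative log of the predicted probability of the true class. As a quick sketch, plugging in the fitted `Ytrain` printed by `myANN` above reproduces its loss at iteration 200:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y = np.array([1., 0., 0.])  # one-hot target\n",
    "p = np.array([9.98783642e-01, 6.22204934e-04, 5.94152662e-04])  # fitted Ytrain\n",
    "loss = -np.sum(y * np.log(p))  # equals -log(p[0]) for this one-hot target\n",
    "print(loss)  # ~0.00121710, the myANN loss at iteration 200"
   ]
  },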
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we can fit the compiled model to our training data. Note that Keras expects a batch of instances, so the single training instance is reshaped to a row vector of shape `(1, 3)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "_uuid": "67897067fabc2f208b360a454d5a524940d34f2f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/200\n",
      "1/1 [==============================] - 2s 2s/step - loss: 1.0246 - categorical_crossentropy: 1.0246\n",
      "Epoch 2/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.1549 - categorical_crossentropy: 0.1549\n",
      "Epoch 3/200\n",
      "1/1 [==============================] - 0s 24ms/step - loss: 0.0939 - categorical_crossentropy: 0.0939\n",
      "Epoch 4/200\n",
      "1/1 [==============================] - 0s 2ms/step - loss: 0.0680 - categorical_crossentropy: 0.0680\n",
      "Epoch 5/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0534 - categorical_crossentropy: 0.0534\n",
      "Epoch 6/200\n",
      "1/1 [==============================] - 0s 20ms/step - loss: 0.0440 - categorical_crossentropy: 0.0440\n",
      "Epoch 7/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0375 - categorical_crossentropy: 0.0375\n",
      "Epoch 8/200\n",
      "1/1 [==============================] - 0s 17ms/step - loss: 0.0326 - categorical_crossentropy: 0.0326\n",
      "Epoch 9/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0289 - categorical_crossentropy: 0.0289\n",
      "Epoch 10/200\n",
      "1/1 [==============================] - 0s 18ms/step - loss: 0.0259 - categorical_crossentropy: 0.0259\n",
      "Epoch 11/200\n",
      "1/1 [==============================] - 0s 30ms/step - loss: 0.0235 - categorical_crossentropy: 0.0235\n",
      "Epoch 12/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0215 - categorical_crossentropy: 0.0215\n",
      "Epoch 13/200\n",
      "1/1 [==============================] - 0s 22ms/step - loss: 0.0198 - categorical_crossentropy: 0.0198\n",
      "Epoch 14/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0183 - categorical_crossentropy: 0.0183\n",
      "Epoch 15/200\n",
      "1/1 [==============================] - 0s 25ms/step - loss: 0.0171 - categorical_crossentropy: 0.0171\n",
      "Epoch 16/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0160 - categorical_crossentropy: 0.0160\n",
      "Epoch 17/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0150 - categorical_crossentropy: 0.0150\n",
      "Epoch 18/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0142 - categorical_crossentropy: 0.0142\n",
      "Epoch 19/200\n",
      "1/1 [==============================] - 0s 21ms/step - loss: 0.0134 - categorical_crossentropy: 0.0134\n",
      "Epoch 20/200\n",
      "1/1 [==============================] - 0s 26ms/step - loss: 0.0127 - categorical_crossentropy: 0.0127\n",
      "Epoch 21/200\n",
      "1/1 [==============================] - 0s 30ms/step - loss: 0.0121 - categorical_crossentropy: 0.0121\n",
      "Epoch 22/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0115 - categorical_crossentropy: 0.0115\n",
      "Epoch 23/200\n",
      "1/1 [==============================] - 0s 11ms/step - loss: 0.0110 - categorical_crossentropy: 0.0110\n",
      "Epoch 24/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0106 - categorical_crossentropy: 0.0106\n",
      "Epoch 25/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0101 - categorical_crossentropy: 0.0101\n",
      "Epoch 26/200\n",
      "1/1 [==============================] - 0s 15ms/step - loss: 0.0097 - categorical_crossentropy: 0.0097\n",
      "Epoch 27/200\n",
      "1/1 [==============================] - 0s 15ms/step - loss: 0.0094 - categorical_crossentropy: 0.0094\n",
      "Epoch 28/200\n",
      "1/1 [==============================] - 0s 15ms/step - loss: 0.0090 - categorical_crossentropy: 0.0090\n",
      "Epoch 29/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0087 - categorical_crossentropy: 0.0087\n",
      "Epoch 30/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0084 - categorical_crossentropy: 0.0084\n",
      "Epoch 31/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0081 - categorical_crossentropy: 0.0081\n",
      "Epoch 32/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0079 - categorical_crossentropy: 0.0079\n",
      "Epoch 33/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0076 - categorical_crossentropy: 0.0076\n",
      "Epoch 34/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0074 - categorical_crossentropy: 0.0074\n",
      "Epoch 35/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0072 - categorical_crossentropy: 0.0072\n",
      "Epoch 36/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0070 - categorical_crossentropy: 0.0070\n",
      "Epoch 37/200\n",
      "1/1 [==============================] - 0s 27ms/step - loss: 0.0068 - categorical_crossentropy: 0.0068\n",
      "Epoch 38/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0066 - categorical_crossentropy: 0.0066\n",
      "Epoch 39/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0064 - categorical_crossentropy: 0.0064\n",
      "Epoch 40/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0063 - categorical_crossentropy: 0.0063\n",
      "Epoch 41/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0061 - categorical_crossentropy: 0.0061\n",
      "Epoch 42/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0060 - categorical_crossentropy: 0.0060\n",
      "Epoch 43/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0058 - categorical_crossentropy: 0.0058\n",
      "Epoch 44/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0057 - categorical_crossentropy: 0.0057\n",
      "Epoch 45/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0056 - categorical_crossentropy: 0.0056\n",
      "Epoch 46/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0054 - categorical_crossentropy: 0.0054\n",
      "Epoch 47/200\n",
      "1/1 [==============================] - 0s 28ms/step - loss: 0.0053 - categorical_crossentropy: 0.0053\n",
      "Epoch 48/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0052 - categorical_crossentropy: 0.0052\n",
      "Epoch 49/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0051 - categorical_crossentropy: 0.0051\n",
      "Epoch 50/200\n",
      "1/1 [==============================] - 0s 15ms/step - loss: 0.0050 - categorical_crossentropy: 0.0050\n",
      "Epoch 51/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0049 - categorical_crossentropy: 0.0049\n",
      "Epoch 52/200\n",
      "1/1 [==============================] - 0s 17ms/step - loss: 0.0048 - categorical_crossentropy: 0.0048\n",
      "Epoch 53/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0047 - categorical_crossentropy: 0.0047\n",
      "Epoch 54/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0046 - categorical_crossentropy: 0.0046\n",
      "Epoch 55/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.0045 - categorical_crossentropy: 0.0045\n",
      "Epoch 56/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0045 - categorical_crossentropy: 0.0045\n",
      "Epoch 57/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0044 - categorical_crossentropy: 0.0044\n",
      "Epoch 58/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0043 - categorical_crossentropy: 0.0043\n",
      "Epoch 59/200\n",
      "1/1 [==============================] - 0s 28ms/step - loss: 0.0042 - categorical_crossentropy: 0.0042\n",
      "Epoch 60/200\n",
      "1/1 [==============================] - 0s 15ms/step - loss: 0.0042 - categorical_crossentropy: 0.0042\n",
      "Epoch 61/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0041 - categorical_crossentropy: 0.0041\n",
      "Epoch 62/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0040 - categorical_crossentropy: 0.0040\n",
      "Epoch 63/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0040 - categorical_crossentropy: 0.0040\n",
      "Epoch 64/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0039 - categorical_crossentropy: 0.0039\n",
      "Epoch 65/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.0038 - categorical_crossentropy: 0.0038\n",
      "Epoch 66/200\n",
      "1/1 [==============================] - 0s 11ms/step - loss: 0.0038 - categorical_crossentropy: 0.0038\n",
      "Epoch 67/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0037 - categorical_crossentropy: 0.0037\n",
      "Epoch 68/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0037 - categorical_crossentropy: 0.0037\n",
      "Epoch 69/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0036 - categorical_crossentropy: 0.0036\n",
      "Epoch 70/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0035 - categorical_crossentropy: 0.0035\n",
      "Epoch 71/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0035 - categorical_crossentropy: 0.0035\n",
      "Epoch 72/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0034 - categorical_crossentropy: 0.0034\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 73/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0034 - categorical_crossentropy: 0.0034\n",
      "Epoch 74/200\n",
      "1/1 [==============================] - 0s 10ms/step - loss: 0.0034 - categorical_crossentropy: 0.0034\n",
      "Epoch 75/200\n",
      "1/1 [==============================] - 0s 33ms/step - loss: 0.0033 - categorical_crossentropy: 0.0033\n",
      "Epoch 76/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0033 - categorical_crossentropy: 0.0033\n",
      "Epoch 77/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0032 - categorical_crossentropy: 0.0032\n",
      "Epoch 78/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0032 - categorical_crossentropy: 0.0032\n",
      "Epoch 79/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0031 - categorical_crossentropy: 0.0031\n",
      "Epoch 80/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0031 - categorical_crossentropy: 0.0031\n",
      "Epoch 81/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0031 - categorical_crossentropy: 0.0031\n",
      "Epoch 82/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0030 - categorical_crossentropy: 0.0030\n",
      "Epoch 83/200\n",
      "1/1 [==============================] - 0s 11ms/step - loss: 0.0030 - categorical_crossentropy: 0.0030\n",
      "Epoch 84/200\n",
      "1/1 [==============================] - 0s 17ms/step - loss: 0.0029 - categorical_crossentropy: 0.0029\n",
      "Epoch 85/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0029 - categorical_crossentropy: 0.0029\n",
      "Epoch 86/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0029 - categorical_crossentropy: 0.0029\n",
      "Epoch 87/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0028 - categorical_crossentropy: 0.0028\n",
      "Epoch 88/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0028 - categorical_crossentropy: 0.0028\n",
      "Epoch 89/200\n",
      "1/1 [==============================] - 0s 18ms/step - loss: 0.0028 - categorical_crossentropy: 0.0028\n",
      "Epoch 90/200\n",
      "1/1 [==============================] - 0s 19ms/step - loss: 0.0027 - categorical_crossentropy: 0.0027\n",
      "Epoch 91/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0027 - categorical_crossentropy: 0.0027\n",
      "Epoch 92/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0027 - categorical_crossentropy: 0.0027\n",
      "Epoch 93/200\n",
      "1/1 [==============================] - 0s 16ms/step - loss: 0.0027 - categorical_crossentropy: 0.0027\n",
      "Epoch 94/200\n",
      "1/1 [==============================] - 0s 16ms/step - loss: 0.0026 - categorical_crossentropy: 0.0026\n",
      "Epoch 95/200\n",
      "1/1 [==============================] - 0s 20ms/step - loss: 0.0026 - categorical_crossentropy: 0.0026\n",
      "Epoch 96/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.0026 - categorical_crossentropy: 0.0026\n",
      "Epoch 97/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0025 - categorical_crossentropy: 0.0025\n",
      "Epoch 98/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0025 - categorical_crossentropy: 0.0025\n",
      "Epoch 99/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0025 - categorical_crossentropy: 0.0025\n",
      "Epoch 100/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0025 - categorical_crossentropy: 0.0025\n",
      "Epoch 101/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0024 - categorical_crossentropy: 0.0024\n",
      "Epoch 102/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0024 - categorical_crossentropy: 0.0024\n",
      "Epoch 103/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0024 - categorical_crossentropy: 0.0024\n",
      "Epoch 104/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0024 - categorical_crossentropy: 0.0024\n",
      "Epoch 105/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0023 - categorical_crossentropy: 0.0023\n",
      "Epoch 106/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0023 - categorical_crossentropy: 0.0023\n",
      "Epoch 107/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0023 - categorical_crossentropy: 0.0023\n",
      "Epoch 108/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0023 - categorical_crossentropy: 0.0023\n",
      "Epoch 109/200\n",
      "1/1 [==============================] - 0s 16ms/step - loss: 0.0023 - categorical_crossentropy: 0.0023\n",
      "Epoch 110/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0022 - categorical_crossentropy: 0.0022\n",
      "Epoch 111/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0022 - categorical_crossentropy: 0.0022\n",
      "Epoch 112/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0022 - categorical_crossentropy: 0.0022\n",
      "Epoch 113/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0022 - categorical_crossentropy: 0.0022\n",
      "Epoch 114/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0022 - categorical_crossentropy: 0.0022\n",
      "Epoch 115/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0021 - categorical_crossentropy: 0.0021\n",
      "Epoch 116/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0021 - categorical_crossentropy: 0.0021\n",
      "Epoch 117/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0021 - categorical_crossentropy: 0.0021\n",
      "Epoch 118/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0021 - categorical_crossentropy: 0.0021\n",
      "Epoch 119/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0021 - categorical_crossentropy: 0.0021\n",
      "Epoch 120/200\n",
      "1/1 [==============================] - 0s 9ms/step - loss: 0.0020 - categorical_crossentropy: 0.0020\n",
      "Epoch 121/200\n",
      "1/1 [==============================] - 0s 10ms/step - loss: 0.0020 - categorical_crossentropy: 0.0020\n",
      "Epoch 122/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0020 - categorical_crossentropy: 0.0020\n",
      "Epoch 123/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0020 - categorical_crossentropy: 0.0020\n",
      "Epoch 124/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0020 - categorical_crossentropy: 0.0020\n",
      "Epoch 125/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0020 - categorical_crossentropy: 0.0020\n",
      "Epoch 126/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0019 - categorical_crossentropy: 0.0019\n",
      "Epoch 127/200\n",
      "1/1 [==============================] - 0s 15ms/step - loss: 0.0019 - categorical_crossentropy: 0.0019\n",
      "Epoch 128/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0019 - categorical_crossentropy: 0.0019\n",
      "Epoch 129/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0019 - categorical_crossentropy: 0.0019\n",
      "Epoch 130/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.0019 - categorical_crossentropy: 0.0019\n",
      "Epoch 131/200\n",
      "1/1 [==============================] - 0s 9ms/step - loss: 0.0019 - categorical_crossentropy: 0.0019\n",
      "Epoch 132/200\n",
      "1/1 [==============================] - 0s 11ms/step - loss: 0.0019 - categorical_crossentropy: 0.0019\n",
      "Epoch 133/200\n",
      "1/1 [==============================] - 0s 17ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 134/200\n",
      "1/1 [==============================] - 0s 14ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 135/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 136/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 137/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 138/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 139/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 140/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0018 - categorical_crossentropy: 0.0018\n",
      "Epoch 141/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n",
      "Epoch 142/200\n",
      "1/1 [==============================] - 0s 11ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n",
      "Epoch 143/200\n",
      "1/1 [==============================] - 0s 17ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n",
      "Epoch 144/200\n",
      "1/1 [==============================] - 0s 29ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 145/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n",
      "Epoch 146/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n",
      "Epoch 147/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n",
      "Epoch 148/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0017 - categorical_crossentropy: 0.0017\n",
      "Epoch 149/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 150/200\n",
      "1/1 [==============================] - 0s 10ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 151/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 152/200\n",
      "1/1 [==============================] - 0s 32ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 153/200\n",
      "1/1 [==============================] - 0s 23ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 154/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 155/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 156/200\n",
      "1/1 [==============================] - 0s 23ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 157/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0016 - categorical_crossentropy: 0.0016\n",
      "Epoch 158/200\n",
      "1/1 [==============================] - 0s 11ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 159/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 160/200\n",
      "1/1 [==============================] - 0s 10ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 161/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 162/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 163/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 164/200\n",
      "1/1 [==============================] - 0s 9ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 165/200\n",
      "1/1 [==============================] - 0s 9ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 166/200\n",
      "1/1 [==============================] - 0s 10ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 167/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 168/200\n",
      "1/1 [==============================] - 0s 14ms/step - loss: 0.0015 - categorical_crossentropy: 0.0015\n",
      "Epoch 169/200\n",
      "1/1 [==============================] - 0s 10ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 170/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 171/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 172/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 173/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 174/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 175/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 176/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 177/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 178/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 179/200\n",
      "1/1 [==============================] - 0s 8ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 180/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0014 - categorical_crossentropy: 0.0014\n",
      "Epoch 181/200\n",
      "1/1 [==============================] - 0s 13ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 182/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 183/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 184/200\n",
      "1/1 [==============================] - 0s 16ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 185/200\n",
      "1/1 [==============================] - 0s 16ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 186/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 187/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 188/200\n",
      "1/1 [==============================] - 0s 4ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 189/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 190/200\n",
      "1/1 [==============================] - 0s 12ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 191/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 192/200\n",
      "1/1 [==============================] - 0s 14ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 193/200\n",
      "1/1 [==============================] - 0s 7ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 194/200\n",
      "1/1 [==============================] - 0s 31ms/step - loss: 0.0013 - categorical_crossentropy: 0.0013\n",
      "Epoch 195/200\n",
      "1/1 [==============================] - 0s 5ms/step - loss: 0.0012 - categorical_crossentropy: 0.0012\n",
      "Epoch 196/200\n",
      "1/1 [==============================] - 0s 21ms/step - loss: 0.0012 - categorical_crossentropy: 0.0012\n",
      "Epoch 197/200\n",
      "1/1 [==============================] - 0s 14ms/step - loss: 0.0012 - categorical_crossentropy: 0.0012\n",
      "Epoch 198/200\n",
      "1/1 [==============================] - 0s 14ms/step - loss: 0.0012 - categorical_crossentropy: 0.0012\n",
      "Epoch 199/200\n",
      "1/1 [==============================] - 0s 3ms/step - loss: 0.0012 - categorical_crossentropy: 0.0012\n",
      "Epoch 200/200\n",
      "1/1 [==============================] - 0s 6ms/step - loss: 0.0012 - categorical_crossentropy: 0.0012\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.callbacks.History at 0x64013dd30>"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.fit(X_train.reshape((1,3)), YY.reshape((1, 3)), epochs=200, batch_size=1)"
   ]
  },
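  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`fit` and `predict` operate on batches, so a single instance of shape `(3,)` is reshaped to `(1, 3)`: a batch containing one row. A quick shape check, using `X_pred`'s values as a stand-in:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = np.array([0.1, 0.7, 0.3])  # a single instance, shape (3,)\n",
    "batch = x.reshape((1, 3))      # a batch with one row per instance\n",
    "print(batch.shape)  # (1, 3)"
   ]
  },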
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the trained model, we can get its predicted output for an input instance. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "_uuid": "1dc231803e034841a955e31c2ce2e3d68b9eced1"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[9.9878985e-01, 6.1903609e-04, 5.9115834e-04]], dtype=float32)"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.predict(X_pred.reshape((1, 3)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "_uuid": "889e04a3f4b08b56ba9dbc0a5771186bd40c23ce"
   },
   "source": [
    "Now we can retrieve the weights of the trained model. Note that, by convention, Keras' representation of the weights and activations is transposed compared to our example above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "_uuid": "8a0592357283bc20cd7cd47d40132b49a2cbb160"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[array([[0.10459033, 0.9023976 ],\n",
       "        [0.33213237, 0.41678518],\n",
       "        [0.7137712 , 0.4071937 ]], dtype=float32),\n",
       " array([1.0459025, 1.0239786], dtype=float32),\n",
       " array([[0.5922062 , 0.88894045],\n",
       "        [0.49371302, 0.39044547]], dtype=float32),\n",
       " array([1.2366258, 1.2326592], dtype=float32),\n",
       " array([[ 2.1052914 , -0.31430343, -0.39098662],\n",
       "        [ 2.2433095 , -0.2361864 , -0.5071224 ]], dtype=float32),\n",
       " array([ 2.8937364 , -0.08099041,  0.18725537], dtype=float32)]"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.get_weights()"
   ]
  }
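  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a final sanity check, a manual forward pass through the trained weights (copied from the `get_weights()` output above, in Keras' `x @ W + b` convention) reproduces `model.predict` on `X_pred`. The notebook's `sigmoid` is reused; a local softmax is defined here so the cell stands alone:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def softmax_np(z):\n",
    "    e = np.exp(z - np.max(z))  # shift for numerical stability\n",
    "    return e / np.sum(e)\n",
    "\n",
    "x = np.array([0.1, 0.7, 0.3])  # X_pred\n",
    "W1 = np.array([[0.10459033, 0.9023976 ],\n",
    "               [0.33213237, 0.41678518],\n",
    "               [0.7137712 , 0.4071937 ]])\n",
    "b1 = np.array([1.0459025, 1.0239786])\n",
    "W2 = np.array([[0.5922062 , 0.88894045],\n",
    "               [0.49371302, 0.39044547]])\n",
    "b2 = np.array([1.2366258, 1.2326592])\n",
    "W3 = np.array([[ 2.1052914 , -0.31430343, -0.39098662],\n",
    "               [ 2.2433095 , -0.2361864 , -0.5071224 ]])\n",
    "b3 = np.array([ 2.8937364 , -0.08099041,  0.18725537])\n",
    "\n",
    "a1 = sigmoid(x @ W1 + b1)\n",
    "a2 = sigmoid(a1 @ W2 + b2)\n",
    "softmax_np(a2 @ W3 + b3)  # ~[9.988e-01, 6.190e-04, 5.912e-04], as in model.predict"
   ]
  }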
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
