{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook shows how to use the simple model serving functionality found in `tfe.serving`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import OrderedDict\n",
    "\n",
    "import tensorflow as tf\n",
    "import tf_encrypted as tfe"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Turn on TFE profling flags since we want to inspect with TensorBoard."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tf_encrypted:Writing trace files\n",
      "INFO:tf_encrypted:Writing event files for each run with a tag\n",
      "INFO:tf_encrypted:Writing event and trace files to '/tmp/tensorboard'\n"
     ]
    }
   ],
   "source": [
    "TENSORBOARD_DIR = \"/tmp/tensorboard\"\n",
    "\n",
    "tfe.set_tfe_trace_flag(True)\n",
    "tfe.set_tfe_events_flag(True)\n",
    "tfe.set_log_directory(TENSORBOARD_DIR)\n",
    "\n",
    "!rm -rf {TENSORBOARD_DIR}"
   ]
  },
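   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "To inspect the traces later, point TensorBoard at the log directory from a separate terminal. The next cell just prints the command to run; the exact invocation may vary with your environment."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# print the TensorBoard command to run in a separate terminal once\n",
     "# trace and event files have been written\n",
     "print(\"tensorboard --logdir {}\".format(TENSORBOARD_DIR))"
    ]
   },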
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Protocol\n",
    "\n",
    "We first configure the protocol we will be using, as well as the servers on which we want to run it.\n",
    "\n",
    "Note that the configuration is saved to file as we will be needing it in the client as well."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "players = OrderedDict([\n",
    "    ('server0', 'localhost:4000'),\n",
    "    ('server1', 'localhost:4001'),\n",
    "    ('server2', 'localhost:4002'),\n",
    "])\n",
    "\n",
    "config = tfe.RemoteConfig(players)\n",
    "config.save('/tmp/tfe.config')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "tfe.set_config(config)\n",
    "tfe.set_protocol(tfe.protocol.Pond())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Launching servers\n",
    "\n",
    "Before actually serving the computation below we need to launch TensorFlow servers in new processes. Run the following in three different terminals. You may have to allow Python to accept incoming connections."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "python -m tf_encrypted.player --config /tmp/tfe.config server0\n",
      "python -m tf_encrypted.player --config /tmp/tfe.config server1\n",
      "python -m tf_encrypted.player --config /tmp/tfe.config server2\n"
     ]
    }
   ],
   "source": [
    "for player_name in players.keys():\n",
    "    print(\"python -m tf_encrypted.player --config /tmp/tfe.config {}\".format(player_name))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Computation\n",
    "\n",
    "We then define the computation we want to run. These will happen on private tensors on the servers defined above.\n",
    "\n",
    "Note that the only single-tensor computations are currently supported."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_shape = (5, 5)\n",
    "output_shape = (5, 5)\n",
    "\n",
    "def computation(x):\n",
    "    return x * x"
   ]
  },
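   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a plaintext sanity check, the same function squares its input elementwise when applied to an ordinary NumPy array."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "# the computation squares its input elementwise; on the servers the\n",
     "# same function will instead operate on private (secret-shared) tensors\n",
     "computation(np.full(input_shape, 5.0))"
    ]
   },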
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can then set up a new `tfe.serving.QueueServer` to serve this computation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "server = tfe.serving.QueueServer(\n",
    "    input_shape=input_shape,\n",
    "    output_shape=output_shape,\n",
    "    computation_fn=computation)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Serving\n",
    "\n",
    "With all of the above in place we can finally connect to our servers, push our graph to them, and start serving computations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tf_encrypted:Starting session on target 'grpc://localhost:4000' using config graph_options {\n",
      "  optimizer_options {\n",
      "    opt_level: L0\n",
      "  }\n",
      "  rewrite_options {\n",
      "    arithmetic_optimization: OFF\n",
      "  }\n",
      "}\n",
      "\n"
     ]
    }
   ],
   "source": [
    "sess = tfe.Session(disable_optimizations=True)\n",
    "\n",
    "tf.Session.reset(sess.target)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Next\n"
     ]
    }
   ],
   "source": [
    "def step_fn():\n",
    "    print(\"Next\")\n",
    "\n",
    "server.run(\n",
    "    sess,\n",
    "    num_steps=1,\n",
    "    step_fn=step_fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this point we switch to the client notebook to run computations."
   ]
  },
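   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The client side might look roughly like the sketch below. It assumes `tfe.serving.QueueClient` as the counterpart to the `QueueServer` above, reloading the configuration we saved earlier; see the client notebook for the exact code."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Rough sketch of the client side; run this from the client notebook,\n",
     "# not here. Assumes tfe.serving.QueueClient mirrors QueueServer.\n",
     "import numpy as np\n",
     "\n",
     "# reload the configuration saved by this notebook\n",
     "config = tfe.RemoteConfig.load('/tmp/tfe.config')\n",
     "tfe.set_config(config)\n",
     "tfe.set_protocol(tfe.protocol.Pond())\n",
     "\n",
     "# connect to the server's queues using matching shapes\n",
     "client = tfe.serving.QueueClient(\n",
     "    input_shape=input_shape,\n",
     "    output_shape=output_shape)\n",
     "\n",
     "sess = tfe.Session()\n",
     "\n",
     "# send a private input and wait for the squared result\n",
     "res = client.run(sess, np.full(input_shape, 5.0))\n",
     "print(res)"
    ]
   }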
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
