{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Note\n",
    "\n",
    "Please view the [README](https://github.com/eclipse/deeplearning4j-examples/blob/master/tutorials/README.md) to learn about installing, setting up dependencies, and importing notebooks in Zeppelin."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Background\n",
    "\n",
    "Why use an autoencoder? In practice, autoencoders are often applied to data denoising and dimensionality reduction. They work well for representation learning, though less well for general-purpose data compression.\n",
    "\n",
    "In deep learning, an autoencoder is a neural network that \"attempts\" to reconstruct its input. It can serve as a form of feature extraction, and autoencoders can be stacked to create \"deep\" networks (see networks such as [DeepBelief](https://deeplearning4j.org/deepbeliefnetwork.html)). Features generated by an autoencoder can be fed into other algorithms for classification, clustering, and anomaly detection.\n",
    "\n",
    "Autoencoders are also useful for data visualization when the raw input data has high dimensionality and cannot easily be plotted. By lowering the dimensionality, the output can sometimes be compressed into a 2D or 3D space for better data exploration.\n",
    "\n",
    "#### Application examples\n",
    "\n",
    "|   |   |   |\n",
    "|---|---|---|\n",
    "|**Data Denoising** | ![Application of 1D total variation denoising to a signal obtained from a single-molecule experiment. Gray is the original signal, black is the denoised signal.](https://upload.wikimedia.org/wikipedia/commons/d/d8/TVD_1D_Example.png) | [Source](https://en.wikipedia.org/wiki/Total_variation_denoising#/media/File:TVD_1D_Example.png) |\n",
    "|**Dimensionality Reduction** | ![Kernel machines are used to convert non-linearly separable functions into a higher dimension linearly separable function.](https://upload.wikimedia.org/wikipedia/commons/thumb/f/fe/Kernel_Machine.svg/512px-Kernel_Machine.svg.png) | [Source](https://en.wikipedia.org/wiki/Dimensionality_reduction#/media/File:Kernel_Machine.svg) |\n",
    "\n",
    "#### How do autoencoders work?\n",
    "\n",
    "Autoencoders are composed of three parts:\n",
    "\n",
    "1. Encoding function (the \"encoder\")\n",
    "2. Decoding function (the \"decoder\")\n",
    "3. Distance function (a \"loss function\")\n",
    "\n",
    "An input is fed into the autoencoder, and the encoder compresses it into a lower-dimensional representation. The decoder then learns to reconstruct the original input from that compressed representation; during unsupervised training, the loss function measures the reconstruction error, which is used to correct both the encoder and the decoder. This process is automatic (hence \"auto\"-encoder); i.e. it does not require human intervention."
   ]
  },
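  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the encode/decode/loss pipeline concrete, here is a minimal sketch in plain Scala. It does not use DL4J; `encode` and `decode` are hypothetical stand-ins for trained network layers. It shows how the mean squared error between an input and its reconstruction serves as the distance function:\n",
    "\n",
    "```scala\n",
    "// Hypothetical \"encoder\": keep only the first k values (a crude form of compression)\n",
    "def encode(input: Array[Double], k: Int): Array[Double] = input.take(k)\n",
    "\n",
    "// Hypothetical \"decoder\": pad the code back to the original length with zeros\n",
    "def decode(code: Array[Double], n: Int): Array[Double] = code.padTo(n, 0.0)\n",
    "\n",
    "// Distance function: mean squared error between input and reconstruction\n",
    "def mse(a: Array[Double], b: Array[Double]): Double =\n",
    "  a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum / a.length\n",
    "\n",
    "val input = Array(1.0, 2.0, 3.0, 4.0)\n",
    "val reconstruction = decode(encode(input, 2), input.length)\n",
    "val loss = mse(input, reconstruction) // error comes entirely from the dropped values\n",
    "```\n",
    "\n",
    "A real autoencoder learns `encode` and `decode` jointly by minimizing this loss over many examples, rather than using fixed functions as above."
   ]
  },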
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What does this tutorial teach?\n",
    "\n",
    "Now that you know how to create different network configurations with `MultiLayerNetwork` and `ComputationGraph`, we will construct a \"stacked\" autoencoder that performs anomaly detection on MNIST digits without pretraining. The goal is to identify outlier digits; i.e. digits that are unusual and atypical. Identification of items, events or observations that \"stand out\" from the norm of a given dataset is broadly known as *anomaly detection*. Anomaly detection does not require a labeled dataset, and can be undertaken with unsupervised learning, which is helpful because most of the world's data is not labeled.\n",
    "\n",
    "This type of anomaly detection uses reconstruction error to measure how well the network reproduces its input. Typical examples should have low reconstruction error, whereas outliers should have high reconstruction error.\n",
    "\n",
    "#### What is anomaly detection good for?\n",
    "\n",
    "Network intrusion, fraud detection, systems monitoring, sensor network event detection (IoT), and unusual trajectory sensing are examples of anomaly detection applications."
   ]
  },
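  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ranking idea can be sketched in a few lines of plain Scala (the scores below are made-up reconstruction errors, not DL4J output). Examples are sorted by reconstruction error, and the highest-error ones become anomaly candidates; this is the same strategy used later in this tutorial to pick the N worst digits:\n",
    "\n",
    "```scala\n",
    "// Hypothetical (id, reconstruction error) pairs for four examples\n",
    "val scored = Seq((\"a\", 0.02), (\"b\", 0.03), (\"c\", 0.41), (\"d\", 0.025))\n",
    "\n",
    "// Sort descending by error; the head of the list holds the anomaly candidates\n",
    "val ranked = scored.sortBy { case (_, err) => -err }\n",
    "val worstK = ranked.take(1).map(_._1) // the single most anomalous example\n",
    "```"
   ]
  },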
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "autoscroll": "auto"
   },
   "outputs": [],
   "source": [
    "import org.apache.commons.lang3.tuple.ImmutablePair\n",
    "import org.apache.commons.lang3.tuple.Pair\n",
    "import org.nd4j.linalg.activations.Activation\n",
    "import org.nd4j.linalg.dataset.api.iterator.DataSetIterator\n",
    "import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator\n",
    "import org.deeplearning4j.nn.api.OptimizationAlgorithm\n",
    "import org.deeplearning4j.nn.conf.NeuralNetConfiguration\n",
    "import org.deeplearning4j.nn.conf.Updater\n",
    "import org.deeplearning4j.nn.conf.layers.DenseLayer\n",
    "import org.deeplearning4j.nn.conf.layers.OutputLayer\n",
    "import org.deeplearning4j.nn.multilayer.MultiLayerNetwork\n",
    "import org.deeplearning4j.nn.weights.WeightInit\n",
    "import org.deeplearning4j.optimize.api.IterationListener\n",
    "import org.deeplearning4j.optimize.listeners.ScoreIterationListener\n",
    "import org.nd4j.linalg.api.ndarray.INDArray\n",
    "import org.nd4j.linalg.dataset.DataSet\n",
    "import org.nd4j.linalg.factory.Nd4j\n",
    "import org.nd4j.linalg.lossfunctions.LossFunctions\n",
    "import java.awt.image.BufferedImage\n",
    "import java.util._\n",
    "import java.util\n",
    "\n",
    "import scala.collection.JavaConversions._\n",
    "import scala.collection.mutable.Buffer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The stacked autoencoder\n",
    "\n",
    "The following autoencoder uses two stacked dense layers for encoding. Each MNIST digit is flattened into a 1D array of length 784 (MNIST images are 28x28 pixels, and 28 &times; 28 = 784).\n",
    "\n",
    "784 &rarr; 250 &rarr; 10 &rarr; 250 &rarr; 784"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "autoscroll": "auto"
   },
   "outputs": [],
   "source": [
    "val conf = new NeuralNetConfiguration.Builder()\n",
    "    .seed(12345)\n",
    "    .iterations(1)\n",
    "    .weightInit(WeightInit.XAVIER)\n",
    "    .updater(Updater.ADAGRAD)\n",
    "    .activation(Activation.RELU)\n",
    "    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)\n",
    "    .learningRate(0.05)\n",
    "    .regularization(true).l2(0.0001)\n",
    "    .list()\n",
    "    .layer(0, new DenseLayer.Builder().nIn(784).nOut(250)\n",
    "            .build())\n",
    "    .layer(1, new DenseLayer.Builder().nIn(250).nOut(10)\n",
    "            .build())\n",
    "    .layer(2, new DenseLayer.Builder().nIn(10).nOut(250)\n",
    "            .build())\n",
    "    .layer(3, new OutputLayer.Builder().nIn(250).nOut(784)\n",
    "            .lossFunction(LossFunctions.LossFunction.MSE)\n",
    "            .build())\n",
    "    .pretrain(false).backprop(true)\n",
    "    .build()\n",
    "\n",
    "val net = new MultiLayerNetwork(conf)\n",
    "net.setListeners(new ScoreIterationListener(1))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using the MNIST iterator\n",
    "\n",
    "The MNIST iterator, like most of Deeplearning4j's built-in iterators, implements the `DataSetIterator` interface. This API allows for the simple instantiation of datasets and automatic downloading of data in the background."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "autoscroll": "auto"
   },
   "outputs": [],
   "source": [
    "//Load 50,000 examples; each 100-example batch is split 80/20 into 40,000 train / 10,000 test\n",
    "val iter = new MnistDataSetIterator(100,50000,false)\n",
    "\n",
    "val featuresTrain = new util.ArrayList[INDArray]\n",
    "val featuresTest = new util.ArrayList[INDArray]\n",
    "val labelsTest = new util.ArrayList[INDArray]\n",
    "\n",
    "val rand = new util.Random(12345)\n",
    "\n",
    "while(iter.hasNext()){\n",
    "    val next = iter.next()\n",
    "    val split = next.splitTestAndTrain(80, rand)  //80/20 split (from miniBatch = 100)\n",
    "    featuresTrain.add(split.getTrain().getFeatures())\n",
    "    val dsTest = split.getTest()\n",
    "    featuresTest.add(dsTest.getFeatures())\n",
    "    val indexes = Nd4j.argMax(dsTest.getLabels(),1) //Convert from one-hot representation -> index\n",
    "    labelsTest.add(indexes)\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Unsupervised training\n",
    "\n",
    "Now that the network configuration is set up and instantiated along with our MNIST test/train iterators, training takes just a few lines of code. The fun begins.\n",
    "\n",
    "Earlier, we attached a `ScoreIterationListener` to the model using the `setListeners()` method. Listener output is redirected to the console because the internals of Deeplearning4j use SLF4J for logging, and Zeppelin redirects that logging; depending on the browser used to run this notebook, you can open the debugger/inspector to view it. This helps reduce clutter in the notebooks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "autoscroll": "auto"
   },
   "outputs": [],
   "source": [
    "// the \"simple\" way to do multiple epochs is to wrap fit() in a loop\n",
    "val nEpochs = 30\n",
    "(1 to nEpochs).foreach{ epoch =>\n",
    "    for(data <- featuresTrain){\n",
    "        net.fit(data, data)\n",
    "    }\n",
    "    println(\"Epoch \" + epoch + \" complete\")\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluating the model\n",
    "\n",
    "Now that the autoencoder has been trained, we'll evaluate the model on the test data. Each example will be scored individually, and a map will be composed that relates each digit to a list of (score, example) pairs.\n",
    "\n",
    "Finally, we will calculate the N best and N worst scores per digit."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "autoscroll": "auto"
   },
   "outputs": [],
   "source": [
    "//Evaluate the model on the test data\n",
    "//Score each example in the test set separately\n",
    "//Compose a map that relates each digit to a list of (score, example) pairs\n",
    "//Then find N best and N worst scores per digit\n",
    "val listsByDigit = new util.HashMap[Integer, List[Pair[Double, INDArray]]]\n",
    "\n",
    "(0 to 9).foreach{ i => listsByDigit.put(i, new util.ArrayList[Pair[Double, INDArray]]) }\n",
    "\n",
    "(0 to featuresTest.size-1).foreach{ i =>\n",
    "    val testData = featuresTest.get(i)\n",
    "    val labels = labelsTest.get(i)\n",
    "    \n",
    "    (0 to testData.rows-1).foreach{ j =>\n",
    "        val example = testData.getRow(j)\n",
    "        val digit = labels.getDouble(j).toInt\n",
    "        val score = net.score(new DataSet(example, example))\n",
    "        // Add (score, example) pair to the appropriate list\n",
    "        val digitAllPairs = listsByDigit.get(digit)\n",
    "        digitAllPairs.add(new ImmutablePair[Double, INDArray](score, example))\n",
    "    }\n",
    "}\n",
    "\n",
    "//Sort each list in the map by score\n",
    "val c = new Comparator[Pair[Double, INDArray]]() {\n",
    "  override def compare(o1: Pair[Double, INDArray],\n",
    "                       o2: Pair[Double, INDArray]): Int =\n",
    "    java.lang.Double.compare(o1.getLeft, o2.getLeft)\n",
    "}\n",
    "\n",
    "for (digitAllPairs <- listsByDigit.values) {\n",
    "  Collections.sort(digitAllPairs, c)\n",
    "}\n",
    "\n",
    "//After sorting, select N best and N worst scores (by reconstruction error) for each digit, where N=5\n",
    "val best = new util.ArrayList[INDArray](50)\n",
    "val worst = new util.ArrayList[INDArray](50)\n",
    "\n",
    "(0 to 9).foreach{ i => \n",
    "    val list = listsByDigit.get(i)\n",
    "    \n",
    "    (0 to 4).foreach{ j=>\n",
    "        best.add(list.get(j).getRight)\n",
    "        worst.add(list.get(list.size - j - 1).getRight)\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Formatting display of BufferedImage\n",
    "\n",
    "Rendering a Java `BufferedImage` in Zeppelin is a bit tricky, but by using base64 encoding and the `%html` interpreter we can write a function that makes this easy. After converting the INDArray to a base64 PNG, we can display it inline with some CSS."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "autoscroll": "auto"
   },
   "outputs": [],
   "source": [
    "// encode an INDArray of pixel values as a base64 PNG string\n",
    "def encodeArrayToImage(arr: INDArray): String = {\n",
    "    import java.util.Base64\n",
    "    import java.io.ByteArrayOutputStream\n",
    "    import javax.imageio.ImageIO\n",
    "\n",
    "    val bi = new BufferedImage(28,28,BufferedImage.TYPE_BYTE_GRAY);\n",
    "    (0 to 783).foreach{ i =>\n",
    "        bi.getRaster().setSample(i % 28, i / 28, 0, (255*arr.getDouble(i)).toInt)\n",
    "    }\n",
    "\n",
    "    val baos = new ByteArrayOutputStream()\n",
    "\n",
    "    ImageIO.write(bi, \"PNG\", baos)\n",
    "    val image = baos.toByteArray()\n",
    "    baos.close()\n",
    "\n",
    "    val encodedImage = Base64.getEncoder().encodeToString(image)\n",
    "    encodedImage\n",
    "}\n",
    "\n",
    "// convert encoded images to HTML\n",
    "def renderBufferedImages(images: Buffer[String]): Unit = {\n",
    "    val output = images.map { encodedImage => s\"\"\"<img src=\"data:image/png;base64,$encodedImage\" style=\"float:left; display:block; margin:10px\"> \"\"\" }\n",
    "    println(\"%html \"+output.mkString)\n",
    "}\n",
    "\n",
    "// loop through digits\n",
    "println(\"%html <h2>Worst Scoring Digits</h2>\")\n",
    "renderBufferedImages(worst.map(encodeArrayToImage))\n",
    "\n",
    "println(\"%html <h2>Best Scoring Digits</h2>\")\n",
    "renderBufferedImages(best.map(encodeArrayToImage))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What's next?\n",
    "\n",
    "- Check out other DL4J tutorials available [on Github](https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Spark 2.0.0 - Scala 2.11",
   "language": "scala",
   "name": "spark2-scala"
  },
  "language_info": {
   "codemirror_mode": "text/x-scala",
   "file_extension": ".scala",
   "mimetype": "text/x-scala",
   "name": "scala",
   "pygments_lexer": "scala",
   "version": "2.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
