{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "collected-egyptian",
   "metadata": {},
   "source": [
    "# Model Uncertainty Estimation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bridal-updating",
   "metadata": {},
   "source": [
    "Hasktorch 0.2.0.0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "minor-schedule",
   "metadata": {},
   "source": [
    "Wouldn't it be nice if the model told us which of its predictions are not reliable? The good news is that this can be done, even on new, completely unseen data,\n",
    "and it is simple to implement in practice.\n",
    "A canonical example is a medical setting: by measuring model uncertainty,\n",
    "a doctor can learn how reliable an AI-assisted diagnosis is.\n",
    "This allows the doctor to make a better-informed decision about whether to trust\n",
    "the model, and potentially to save someone's life.\n",
    "\n",
    "Today we build upon [Day 7](https://penkovsky.com/neural-networks/day7) and we continue our journey with Hasktorch:\n",
    "\n",
    "1. We will introduce a Dropout layer.\n",
    "1. We will compute on a graphics processing unit (GPU).\n",
    "1. We will also show how to load and save models.\n",
    "1. We will train with [Adam](https://penkovsky.com/neural-networks/day2) optimizer.\n",
    "1. And finally we will talk about model uncertainty estimation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "domestic-albuquerque",
   "metadata": {},
   "source": [
    "## Dropout Layer\n",
    "\n",
    "Neural networks, like any other model with many parameters, tend to overfit. By overfitting I mean\n",
    "\"[fail to fit to additional data or predict future observations reliably](https://en.wikipedia.org/wiki/Overfitting)\". Let us consider a classical example below.\n",
    "\n",
    "<center>\n",
    "<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/1/19/Overfitting.svg/480px-Overfitting.svg.png\" width=\"300\" />\n",
    "</center>\n",
    "<!-- ![](https://upload.wikimedia.org/wikipedia/commons/1/19/Overfitting.svg) -->\n",
    "\n",
    "The green line is the decision boundary of an overfitted model.\n",
    "We see that the model tries to memorize every possible data point.\n",
    "However, it fails to generalize. To ameliorate the situation, we perform\n",
    "so-called *regularization*. That is a technique that helps to prevent overfitting.\n",
    "In the image above, the black line is a decision boundary of a regularized model.\n",
    "\n",
    "One of the regularization techniques for artificial neural networks is called\n",
    "[dropout](https://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf)\n",
    "or [dilution](https://en.wikipedia.org/wiki/Dilution_(neural_networks)).\n",
    "Its principle of operation is quite simple.\n",
    "During neural network training, we randomly\n",
    "disconnect each neuron with some probability.\n",
    "It turns out that training with dropout results in more reliable\n",
    "neural network models."
   ]
  },
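  {
   "cell_type": "markdown",
   "id": "dropout-sketch",
   "metadata": {},
   "source": [
    "To make the mechanics concrete, below is a minimal sketch of *inverted* dropout on a plain list of activations, independent of Hasktorch (the helper name `dropoutList` is ours, not a library function). Each element is zeroed with probability `p`, and the survivors are scaled by `1 / (1 - p)` so that the expected activation stays unchanged.\n",
    "\n",
    "```haskell\n",
    "import System.Random ( randomRIO )\n",
    "\n",
    "-- Inverted dropout on a list of activations:\n",
    "-- zero each element with probability p, scale\n",
    "-- survivors by 1 / (1 - p) to preserve the\n",
    "-- expected activation (assumes 0 <= p < 1).\n",
    "dropoutList :: Double -> [Double] -> IO [Double]\n",
    "dropoutList p = mapM $ \\x -> do\n",
    "  u <- randomRIO (0, 1)\n",
    "  pure $ if u < p then 0 else x / (1 - p)\n",
    "```\n",
    "\n",
    "At inference time the layer is simply the identity, which is what the `Bool` switch of Hasktorch's `dropout` controls."
   ]
  },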
  {
   "cell_type": "markdown",
   "id": "considered-entry",
   "metadata": {},
   "source": [
    "## A Neural Network with Dropout"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "active-checkout",
   "metadata": {},
   "source": [
    "The data structures `MLP` and `MLPSpec` remain unchanged."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dependent-campaign",
   "metadata": {},
   "outputs": [],
   "source": [
    "{-# LANGUAGE DeriveAnyClass #-}\n",
    "{-# LANGUAGE DeriveGeneric #-}\n",
    "{-# LANGUAGE FlexibleContexts #-}\n",
    "{-# LANGUAGE MultiParamTypeClasses #-}\n",
    "{-# LANGUAGE RecordWildCards #-}\n",
    "{-# LANGUAGE ScopedTypeVariables #-}\n",
    "{-# LANGUAGE TypeApplications #-} \n",
    "\n",
    "import Control.Monad ( forM_, forM, when, (<=<) )\n",
    "import Control.Monad.Cont ( ContT (..) )\n",
    "import GHC.Generics\n",
    "import Pipes hiding ( (~>) )\n",
    "import qualified Pipes.Prelude as P\n",
    "import Text.Printf ( printf\n",
    "                   , PrintfArg )\n",
    "import Torch\n",
    "import Torch.Serialize\n",
    "import Torch.Typed.Vision ( initMnist, MnistData )\n",
    "import qualified Torch.Vision as V\n",
    "import Torch.Lens ( HasTypes (..)\n",
    "                  , over \n",
    "                  , types )\n",
    "import Prelude hiding ( exp )\n",
    "\n",
    "data MLP = MLP\n",
    "  { fc1 :: Linear,\n",
    "    fc2 :: Linear,\n",
    "    fc3 :: Linear\n",
    "  }\n",
    "  deriving (Generic, Show, Parameterized)\n",
    "\n",
    "data MLPSpec = MLPSpec\n",
    "  { i :: Int,\n",
    "    h1 :: Int,\n",
    "    h2 :: Int,\n",
    "    o :: Int\n",
    "  }\n",
    "  deriving (Show, Eq)\n",
    "  \n",
    "instance Randomizable MLPSpec MLP where\n",
    "  sample MLPSpec {..} =\n",
    "    MLP\n",
    "      <$> sample (LinearSpec i h1)\n",
    "      <*> sample (LinearSpec h1 h2)\n",
    "      <*> sample (LinearSpec h2 o)\n",
    "\n",
    "(~>) :: (a -> b) -> (b -> c) -> a -> c\n",
    "f ~> g = g . f"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "amino-outreach",
   "metadata": {},
   "source": [
    "However, we will need to modify the `mlp` network to include\n",
    "a dropout layer. If we inspect the type\n",
    "`dropout :: Double -> Bool -> Tensor -> IO Tensor`,\n",
    "we see that it accepts three arguments:\n",
    "a `Double` dropout probability,\n",
    "a `Bool` that turns this layer on or off,\n",
    "and a data `Tensor`.\n",
    "Typically, we turn dropout on during training\n",
    "and off during the inference stage.\n",
    "\n",
    "However, the biggest distinction between, say, the `relu`\n",
    "function and `dropout` is that `relu` is a *pure* function,\n",
    "i.e. it does not have any side effects.\n",
    "This means that every time we call a pure function with the same arguments,\n",
    "the result will be the same.\n",
    "This is not the case with `dropout`, which relies on an\n",
    "(external) random number generator and therefore may return\n",
    "a new result each time.\n",
    "That is why its outcome is an `IO Tensor`.\n",
    "\n",
    "One has to pay particular attention to such `IO` functions,\n",
    "because they can change the state of the external world.\n",
    "This can be printing text on the screen,\n",
    "deleting a file, or launching missiles.\n",
    "Typically, we prefer to keep functions pure whenever possible,\n",
    "as function purity improves reasoning\n",
    "about the program: it is child's play to\n",
    "refactor (reorganize) a program consisting only\n",
    "of pure functions."
   ]
  },
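  {
   "cell_type": "markdown",
   "id": "purity-sketch",
   "metadata": {},
   "source": [
    "As a minimal illustration of this difference, outside of Hasktorch (both function names below are made up for the example):\n",
    "\n",
    "```haskell\n",
    "import System.Random ( randomRIO )\n",
    "\n",
    "-- A pure function: equal inputs always yield equal outputs\n",
    "square :: Int -> Int\n",
    "square x = x * x\n",
    "\n",
    "-- An IO action: the result may differ between calls, because\n",
    "-- it depends on an external random number generator,\n",
    "-- just like `dropout` does\n",
    "noisy :: Int -> IO Int\n",
    "noisy x = do\n",
    "  r <- randomRIO (0, 9)\n",
    "  pure (x + r)\n",
    "```\n",
    "\n",
    "`square 7` is `49` on every call, while two calls to `noisy 7` may disagree; the `IO Int` type is what records this possibility."
   ]
  },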
  {
   "cell_type": "markdown",
   "id": "further-beijing",
   "metadata": {},
   "source": [
    "I find the so-called *do-notation* to be the most natural\n",
    "way to combine both pure functions and those with side-effects.\n",
    "Pure equations can be grouped under the `let` keyword,\n",
    "while side effects are sequenced with the special `<-` notation.\n",
    "This is how we integrate `dropout` in `mlp`.\n",
    "Note that now the outcome of `mlp` also becomes an `IO Tensor`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "graphic-margin",
   "metadata": {},
   "outputs": [],
   "source": [
    "mlp :: MLP -> Bool -> Tensor -> IO Tensor\n",
    "mlp MLP {..} isStochastic x0 = do\n",
    "  -- This subnetwork encapsulates the composition\n",
    "  -- of pure functions\n",
    "  let sub1 =\n",
    "          linear fc1\n",
    "          ~> relu\n",
    "\n",
    "          ~> linear fc2\n",
    "          ~> relu\n",
    "\n",
    "  -- The dropout is applied to the output\n",
    "  -- of the subnetwork\n",
    "  x1 <- dropout\n",
    "          0.1   -- Dropout probability\n",
    "          isStochastic  -- Activate Dropout when in stochastic mode\n",
    "          (sub1 x0)  -- Apply dropout to\n",
    "                     -- the output of `relu` in layer 2\n",
    "              \n",
    "  -- Another linear layer\n",
    "  let x2 = linear fc3 x1\n",
    "  \n",
    "  -- Finally, logSoftmax, which is numerically more stable\n",
    "  -- compared to simple log(softmax(x2))\n",
    "  return $ logSoftmax (Dim 1) x2"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "suspended-advocacy",
   "metadata": {},
   "source": [
    "## Computing on a GPU"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "public-voluntary",
   "metadata": {},
   "source": [
    "To transfer data onto a GPU, we use `toDevice :: ... => Device -> a -> a`.\n",
    "Below are helpers that traverse data structures containing tensors\n",
    "(e.g. `MLP`) and move those tensors between devices."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "amended-impression",
   "metadata": {},
   "outputs": [],
   "source": [
    "toLocalModel :: forall a. HasTypes a Tensor => Device -> DType -> a -> a\n",
    "toLocalModel device' dtype' = over (types @Tensor @a) (toDevice device')\n",
    "\n",
    "fromLocalModel :: forall a. HasTypes a Tensor => a -> a\n",
    "fromLocalModel = over (types @Tensor @a) (toDevice (Device CPU 0))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "composite-piano",
   "metadata": {},
   "source": [
    "Below is a shortcut to transfer data to the `cuda:0` device, assuming the `Float` type."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "timely-speaker",
   "metadata": {},
   "outputs": [],
   "source": [
    "toLocalModel' = toLocalModel (Device CUDA 0) Float "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "variable-nicholas",
   "metadata": {},
   "source": [
    "The train loop is almost the same as in the previous post, except for a few changes.\n",
    "First, we transfer the training data to the GPU with `toLocalModel'`\n",
    "(assuming that the model has already been moved to the GPU).\n",
    "Second, `predic <- mlp model isTrain input` is now an `IO` action.\n",
    "Third, we manage the optimizer's internal state."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "pediatric-tribune",
   "metadata": {},
   "outputs": [],
   "source": [
    "trainLoop\n",
    "  :: Optimizer o\n",
    "  => (MLP, o) -> LearningRate -> ListT IO (Tensor, Tensor) -> IO (MLP, o)\n",
    "trainLoop (model0, opt0) lr = P.foldM step begin done . enumerateData\n",
    "  where\n",
    "    isTrain = True\n",
    "    step :: Optimizer o => (MLP, o) -> ((Tensor, Tensor), Int) -> IO (MLP, o)\n",
    "    step (model, opt) args = do\n",
    "      let ((input, label), iter) = toLocalModel' args\n",
    "      predic <- mlp model isTrain input\n",
    "      let loss = nllLoss' label predic\n",
    "      -- Print loss every 100 batches\n",
    "      when (iter `mod` 100 == 0) $ do\n",
    "        putStrLn\n",
    "          $ printf \"Batch: %d | Loss: %.2f\" iter (asValue loss :: Float)\n",
    "      runStep model opt loss lr\n",
    "    done = pure\n",
    "    begin = pure (model0, opt0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "timely-occasions",
   "metadata": {},
   "source": [
    "We slightly modify the `train` function to use the Adam optimizer via `mkAdam`:\n",
    "1. 0 is the initial iteration number, which the optimizer then increments.\n",
    "2. We provide the `beta1` and `beta2` decay rates.\n",
    "3. `flattenParameters net0` is needed so that the optimizer can initialize its momenta with the shapes of the trained parameters. See also [Day 2](https://penkovsky.com/neural-networks/day2) for more details.\n",
    "\n",
    "<!--\n",
    "We also reduced the learning rate to 1e-5 because of Adam instability.\n",
    "I guess that with batch normalization this won't be an issue anymore.\n",
    "-->"
   ]
  },
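  {
   "cell_type": "markdown",
   "id": "adam-sketch",
   "metadata": {},
   "source": [
    "For reference, a single Adam update on one scalar parameter can be sketched as follows. This is a simplification of the state that `mkAdam` maintains per tensor; the function and variable names here are ours:\n",
    "\n",
    "```haskell\n",
    "-- One Adam step for a scalar parameter w.\n",
    "-- m, v: running first and second moments of the gradient;\n",
    "-- t: iteration number, starting from 1.\n",
    "adamStep :: Double -> Double -> Double -> Int -> Double\n",
    "         -> (Double, Double, Double) -> (Double, Double, Double)\n",
    "adamStep lr beta1 beta2 t grad (w, m, v) = (w', m', v')\n",
    "  where\n",
    "    m' = beta1 * m + (1 - beta1) * grad\n",
    "    v' = beta2 * v + (1 - beta2) * grad * grad\n",
    "    -- Bias correction for the zero-initialized moments\n",
    "    mHat = m' / (1 - beta1 ^ t)\n",
    "    vHat = v' / (1 - beta2 ^ t)\n",
    "    w' = w - lr * mHat / (sqrt vHat + 1e-8)\n",
    "```\n",
    "\n",
    "The `beta1` and `beta2` values below control how quickly the moment estimates `m` and `v` forget old gradients."
   ]
  },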
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "generic-blank",
   "metadata": {},
   "outputs": [],
   "source": [
    "train :: V.MNIST IO -> Int -> MLP -> IO MLP\n",
    "train trainMnist epochs net0 = do   \n",
    "    (net', _) <- foldLoop (net0, optimizer) epochs $ \\(net', optState) _ ->\n",
    "      runContT (streamFromMap dsetOpt trainMnist)\n",
    "      $ trainLoop (net', optState) lr . fst\n",
    "      \n",
    "    return net'\n",
    "  where\n",
    "    dsetOpt = datasetOpts workers\n",
    "    workers = 2\n",
    "    lr = 1e-4  -- Learning rate\n",
    "    optimizer = mkAdam 0 beta1 beta2 (flattenParameters net0)\n",
    "    beta1 = 0.9\n",
    "    beta2 = 0.999"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "qualified-alberta",
   "metadata": {},
   "source": [
    "Here is a function to get model accuracy:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "latin-scientist",
   "metadata": {},
   "outputs": [],
   "source": [
    "accuracy :: MLP -> ListT IO (Tensor, Tensor) -> IO Float\n",
    "accuracy net = P.foldM step begin done . enumerateData\n",
    "  where\n",
    "    step :: (Int, Int) -> ((Tensor, Tensor), Int) -> IO (Int, Int)\n",
    "    step (ac, total) args = do\n",
    "      let ((input, labels), _) = toLocalModel' args\n",
    "      -- Compute predictions\n",
    "      predic <- let stochastic = False\n",
    "                in argmax (Dim 1) RemoveDim \n",
    "                     <$> mlp net stochastic input\n",
    "    \n",
    "      let correct = asValue\n",
    "                        -- Sum those elements\n",
    "                        $ sumDim (Dim 0) RemoveDim Int64\n",
    "                        -- Find correct predictions\n",
    "                        $ predic `eq` labels\n",
    "                        \n",
    "      let batchSize = head $ shape predic\n",
    "      return (ac + correct, total + batchSize)\n",
    "      \n",
    "    -- When done folding, compute the accuracy\n",
    "    done (ac, total) = pure $ fromIntegral ac / fromIntegral total\n",
    "    \n",
    "    -- Initial errors and totals\n",
    "    begin = pure (0, 0)\n",
    "    \n",
    "testAccuracy :: V.MNIST IO -> MLP -> IO Float\n",
    "testAccuracy testStream net = do\n",
    "    runContT (streamFromMap (datasetOpts 2) testStream) $ accuracy net . fst"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "urban-retention",
   "metadata": {},
   "source": [
    "Below we provide the MLP specification: the number of neurons in each layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "light-isaac",
   "metadata": {},
   "outputs": [],
   "source": [
    "spec = MLPSpec 784 300 50 10  "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "understanding-belize",
   "metadata": {},
   "source": [
    "## Saving and Loading the Model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "magnetic-extreme",
   "metadata": {},
   "source": [
    "Before we can save the model, we first have to convert its parameters to dependent tensors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "numerical-young",
   "metadata": {},
   "outputs": [],
   "source": [
    "save' :: MLP -> FilePath -> IO ()\n",
    "save' net = save (map toDependent . flattenParameters $ net)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "historic-concern",
   "metadata": {},
   "source": [
    "The inverse is done when loading a model. We also replace the\n",
    "parameters in a newly generated model with the ones we\n",
    "have just loaded:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "personal-bargain",
   "metadata": {},
   "outputs": [],
   "source": [
    "load' :: FilePath -> IO MLP\n",
    "load' fpath = do\n",
    "  params <- mapM makeIndependent <=< load $ fpath\n",
    "  net0 <- sample spec\n",
    "  return $ replaceParameters net0 params"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "interpreted-primary",
   "metadata": {},
   "source": [
    "Finally, load the MNIST data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "better-origin",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "(trainData, testData) <- initMnist \"data\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eight-garlic",
   "metadata": {},
   "source": [
    "To train a new model (with `lr = 1e-4`, instead of `1e-5`), remove `{-` and `-}` to uncomment the lines below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "minute-upset",
   "metadata": {},
   "outputs": [],
   "source": [
    "{-\n",
    "-- A train \"loader\"\n",
    "trainMnistStream = V.MNIST { batchSize = 256, mnistData = trainData }\n",
    "net0 <- toLocalModel' <$> sample spec\n",
    "\n",
    "epochs = 5\n",
    "net' <- train trainMnistStream epochs net0\n",
    "\n",
    "-- Saving the trained model\n",
    "save' net' \"weights.bin\"\n",
    "-}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "funded-anime",
   "metadata": {},
   "source": [
    "Load a pretrained model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "adjacent-alias",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "net <- load' \"weights.bin\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "upper-patch",
   "metadata": {},
   "source": [
    "Verify the model's accuracy:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "downtown-simpson",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Accuracy 0.9245"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "-- A test \"loader\"\n",
    "testMnistStream = V.MNIST { batchSize = 1000, mnistData = testData }\n",
    "\n",
    "ac <- testAccuracy testMnistStream net\n",
    "putStrLn $ \"Accuracy \" ++ show ac"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "hidden-berkeley",
   "metadata": {},
   "source": [
    "The accuracy is not tremendous, but it can be improved by introducing [batch norm](https://penkovsky.com/neural-networks/day4), [convolutional layers](https://penkovsky.com/neural-networks/day5), and longer training. For the discussion of model uncertainty estimation that follows, this accuracy is good enough."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eligible-slope",
   "metadata": {},
   "source": [
    "## Predictive Entropy"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "short-calgary",
   "metadata": {},
   "source": [
    "Model uncertainties are calculated as:\n",
    "\n",
    "$$\\mathbb{H}(y|\\mathbf{x}) = -\\sum_c p(y = c|\\mathbf{x}) \\log p(y = c|\\mathbf{x}),$$\n",
    "\n",
    "where $y$ is the label, $\\mathbf{x}$ the input image, $c$ a class, and $p$ a probability.\n",
    "\n",
    "We call $\\mathbb{H}$ the [predictive entropy](https://towardsdatascience.com/2-easy-ways-to-measure-your-image-classification-models-uncertainty-1c489fefaec8). And it is the very dropout\n",
    "technique that helps us estimate these uncertainties.\n",
    "All we need to do is collect several predictions in the stochastic mode\n",
    "(i.e. with dropout enabled)\n",
    "and apply the formula above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "certified-campaign",
   "metadata": {},
   "outputs": [],
   "source": [
    "predictiveEntropy :: Tensor -> Float\n",
    "predictiveEntropy predictions =\n",
    "  let epsilon = 1e-45\n",
    "      a = meanDim (Dim 0) RemoveDim Float predictions\n",
    "      b = Torch.log $ a + epsilon\n",
    "  in asValue $ negate $ sumAll $ a * b"
   ]
  },
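  {
   "cell_type": "markdown",
   "id": "entropy-example",
   "metadata": {},
   "source": [
    "To build intuition for the numbers this produces, here is the same computation on plain lists (our own sketch, not a Hasktorch function): average the stochastic softmax outputs, then take the entropy of the mean.\n",
    "\n",
    "```haskell\n",
    "-- Entropy of the averaged prediction, on plain lists\n",
    "entropyOfMean :: [[Double]] -> Double\n",
    "entropyOfMean rows =\n",
    "  let n = fromIntegral (length rows)\n",
    "      -- Average the softmax outputs over the stochastic passes\n",
    "      avg = map (/ n) $ foldr1 (zipWith (+)) rows\n",
    "  in negate $ sum [ p * log (p + 1e-45) | p <- avg ]\n",
    "```\n",
    "\n",
    "A confident, repeatable one-hot prediction gives an entropy near 0, while uniform predictions over 10 classes give the maximum, ln 10 ≈ 2.30. Disagreement between the stochastic passes pushes the entropy up, and that is exactly the signal we exploit."
   ]
  },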
  {
   "cell_type": "markdown",
   "id": "sexual-default",
   "metadata": {},
   "source": [
    "## Visualizing Softmax Predictions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "promotional-palmer",
   "metadata": {},
   "source": [
    "To get a better feeling for what the model's outputs look like,\n",
    "it would be nice to visualize the softmax output\n",
    "as a histogram or a bar chart."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "freelance-photograph",
   "metadata": {},
   "outputs": [],
   "source": [
    "-- Barchart inspired by https://github.com/morishin/ascii-horizontal-barchart/blob/master/src/chart.js\n",
    "bar :: Floating a => RealFrac a => PrintfArg a => [String] -> [a] -> IO ()\n",
    "bar lab xs = forM_ ys putStrLn\n",
    "  where\n",
    "    ys = let lab' = map (appendSpaces maxLen . Prelude.take maxLabelLen) lab\n",
    "         in zipWith3 (printf \"%s %s %.2f\") lab' (showBar xs) xs\n",
    "    appendSpaces maxN s = let l = length s\n",
    "                          in s ++ replicate (maxN - l) ' '\n",
    "    maxLen = Prelude.min maxLabelLen $ _findmax . map length $ lab\n",
    "    maxLabelLen = 15\n",
    "    \n",
    "showBar :: Floating a => RealFrac a => [a] -> [String]\n",
    "showBar xs =\n",
    "  let maxVal = _findmax xs\n",
    "      maxBarLen = 50\n",
    "  in map (drawBar maxBarLen maxVal) xs\n",
    "\n",
    "-- | Formats a bar string\n",
    "--\n",
    "-- >>> drawBar 100 1 100\n",
    "-- \"▉\"\n",
    "-- >>> drawBar 100 1.5 100\n",
    "-- \"▉▋\"\n",
    "-- >>> drawBar 100 2 100\n",
    "-- \"▉▉\"\n",
    "drawBar :: Floating a => RealFrac a => a -> a -> a -> String\n",
    "drawBar maxBarLen maxValue value = bar1\n",
    "  where \n",
    "    barLength = value * maxBarLen / maxValue\n",
    "    wholeNumberPart = Prelude.floor barLength\n",
    "    fractionalPart = barLength - fromIntegral wholeNumberPart\n",
    "    \n",
    "    bar0 = replicate wholeNumberPart $ _frac _maxFrac\n",
    "    bar1 = if fractionalPart > 0\n",
    "      then bar0 ++ [_frac $ Prelude.floor $ fractionalPart * (_maxFrac + 1)]\n",
    "      else bar0 ++ \"\"\n",
    "      \n",
    "    _frac 0 = '▏'\n",
    "    _frac 1 = '▎'\n",
    "    _frac 2 = '▍'\n",
    "    _frac 3 = '▋'\n",
    "    _frac 4 = '▊'\n",
    "    _frac _ = '▉'\n",
    "\n",
    "    _maxFrac = 5\n",
    "      \n",
    "_findmax = foldr1 (\\x y -> if x >= y then x else y)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "distinct-replacement",
   "metadata": {},
   "source": [
    "For instance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "exclusive-baptist",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "apples  ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 50.00\n",
       "oranges ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 100.00\n",
       "kiwis   ▉▉▉▉▉▉▉▉▉▉▉▉▋ 25.00"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "bar [\"apples\", \"oranges\", \"kiwis\"] [50, 100, 25]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "lesser-sphere",
   "metadata": {},
   "source": [
    "Now we would like to display an image together with its predictive entropy\n",
    "and the softmax output, followed by the prediction and the ground truth.\n",
    "To transform logSoftmax into softmax, we use the following identity:\n",
    "\n",
    "$$e^{\\ln(\\rm{softmax}(x))} = \\rm{softmax}(x),$$\n",
    "\n",
    "that is `softmax = exp . logSoftmax`."
   ]
  },
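  {
   "cell_type": "markdown",
   "id": "softmax-identity-check",
   "metadata": {},
   "source": [
    "This identity is easy to verify numerically on plain lists (the helper names below are ours):\n",
    "\n",
    "```haskell\n",
    "-- Plain-list softmax and log-softmax\n",
    "softmaxL :: [Double] -> [Double]\n",
    "softmaxL xs = let es = map exp xs in map (/ sum es) es\n",
    "\n",
    "logSoftmaxL :: [Double] -> [Double]\n",
    "logSoftmaxL xs = let m = log (sum (map exp xs))\n",
    "                 in map (subtract m) xs\n",
    "```\n",
    "\n",
    "Up to floating-point rounding, `map exp (logSoftmaxL xs)` agrees with `softmaxL xs`."
   ]
  },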
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "alike-watershed",
   "metadata": {},
   "outputs": [],
   "source": [
    "displayImage :: MLP -> (Tensor, Tensor) -> IO ()\n",
    "displayImage model (testImg, testLabel) = do\n",
    "  let repeatN = 20\n",
    "      stochastic = True\n",
    "  preds <- forM [1..repeatN] $ \\_ -> exp  -- logSoftmax -> softmax\n",
    "                                     <$> mlp model stochastic testImg\n",
    "  pred0 <- mlp model (not stochastic) testImg\n",
    "  let entropy = predictiveEntropy $ Torch.cat (Dim 0) preds\n",
    "  -- Select only images with high entropy\n",
    "  when (entropy > 0.9) $ do\n",
    "      V.dispImage testImg\n",
    "      putStr \"Entropy \"\n",
    "      print entropy\n",
    "      -- exp . logSoftmax = softmax\n",
    "      bar (map show [0..9]) (asValue $ flattenAll $ exp pred0 :: [Float])\n",
    "      putStrLn $ \"Model        : \" ++ (show . argmax (Dim 1) RemoveDim . exp $ pred0)\n",
    "      putStrLn $ \"Ground Truth : \" ++ show testLabel"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "valid-jesus",
   "metadata": {},
   "source": [
    "Show only those images that the model is uncertain about, i.e. with entropy > 0.9."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "loaded-conservation",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "              \n",
       "              \n",
       "     +%       \n",
       "     %        \n",
       "     *        \n",
       "    #-  +%%=  \n",
       "    %  %%  %  \n",
       "    % %+   #  \n",
       "    % %    *  \n",
       "    %  % :%   \n",
       "    #*:=%#    \n",
       "     -%=.     \n",
       "              \n",
       "              \n",
       "Entropy 1.044228\n",
       "0 ▉▏ 0.01\n",
       "1 ▏ 0.00\n",
       "2 ▋ 0.01\n",
       "3 ▏ 0.00\n",
       "4 ▉ 0.01\n",
       "5 ▍ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.70\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▎ 0.21\n",
       "9 ▉▉▉▋ 0.05\n",
       "Model        : Tensor Int64 [1] [ 6]\n",
       "Ground Truth : Tensor Int64 [1] [ 6]\n",
       "              \n",
       "              \n",
       "      .#%#.   \n",
       "    %%+:      \n",
       "     %        \n",
       "     %..      \n",
       "    ##-#%.    \n",
       "         -%   \n",
       "          :%  \n",
       "           +  \n",
       "    -     .%  \n",
       "    @%+*%%+   \n",
       "              \n",
       "              \n",
       "Entropy 1.2909155\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▍ 0.00\n",
       "3 ▉▉▉▉▉▉▉▉ 0.07\n",
       "4 ▏ 0.00\n",
       "5 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.44\n",
       "6 ▏ 0.00\n",
       "7 ▍ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.47\n",
       "9 ▉▏ 0.01\n",
       "Model        : Tensor Int64 [1] [ 8]\n",
       "Ground Truth : Tensor Int64 [1] [ 5]\n",
       "              \n",
       "              \n",
       "              \n",
       "     =-     = \n",
       "     #-    =# \n",
       "     %-    #  \n",
       "    +%     %  \n",
       "    %.    .%  \n",
       "   ##     .*  \n",
       "   %%%%%#%#.  \n",
       "   .      %   \n",
       "              \n",
       "              \n",
       "              \n",
       "Entropy 1.3325933\n",
       "0 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▎ 0.19\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.46\n",
       "5 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▊ 0.18\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▊ 0.16\n",
       "7 ▏ 0.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "Model        : Tensor Int64 [1] [ 4]\n",
       "Ground Truth : Tensor Int64 [1] [ 4]\n",
       "              \n",
       "              \n",
       "       *:     \n",
       "     :%%*     \n",
       "    #- -+     \n",
       "       -      \n",
       "       #      \n",
       "      +:      \n",
       "      #    =. \n",
       "     #.  =%:  \n",
       "     *.*%-    \n",
       "    #%%:      \n",
       "              \n",
       "              \n",
       "Entropy 1.2533671\n",
       "0 ▉ 0.01\n",
       "1 ▉▉▍ 0.03\n",
       "2 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.38\n",
       "3 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.54\n",
       "4 ▏ 0.00\n",
       "5 ▋ 0.01\n",
       "6 ▏ 0.00\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▋ 0.03\n",
       "9 ▏ 0.00\n",
       "Model        : Tensor Int64 [1] [ 3]\n",
       "Ground Truth : Tensor Int64 [1] [ 2]\n",
       "              \n",
       "              \n",
       "              \n",
       "     +##-     \n",
       "     *   :    \n",
       "     =        \n",
       "     %  =     \n",
       "     %  %     \n",
       "     -= @     \n",
       "      = %     \n",
       "        %     \n",
       "        %     \n",
       "        %     \n",
       "              \n",
       "Entropy 0.9308149\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▉ 0.01\n",
       "4 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.29\n",
       "5 ▍ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▎ 0.00\n",
       "8 ▉▎ 0.02\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.67\n",
       "Model        : Tensor Int64 [1] [ 9]\n",
       "Ground Truth : Tensor Int64 [1] [ 9]\n",
       "              \n",
       "              \n",
       "              \n",
       "        #     \n",
       "      % #     \n",
       "      % *     \n",
       "      % =     \n",
       "     %%@%     \n",
       "     *  %     \n",
       "        %     \n",
       "        %     \n",
       "        %     \n",
       "        =     \n",
       "              \n",
       "Entropy 1.39582\n",
       "0 ▏ 0.00\n",
       "1 ▉▍ 0.01\n",
       "2 ▏ 0.00\n",
       "3 ▉▉▉▉▉▊ 0.06\n",
       "4 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.48\n",
       "5 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▋ 0.17\n",
       "6 ▉▉▉▉ 0.04\n",
       "7 ▏ 0.00\n",
       "8 ▉▋ 0.02\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.22\n",
       "Model        : Tensor Int64 [1] [ 4]\n",
       "Ground Truth : Tensor Int64 [1] [ 4]\n",
       "              \n",
       "              \n",
       "              \n",
       "      .#%@    \n",
       "      %%%%=   \n",
       "     +%. %#   \n",
       "      %%%%:   \n",
       "       %%%    \n",
       "      -%%     \n",
       "     -%%      \n",
       "    .%%       \n",
       "    %%-       \n",
       "    %*        \n",
       "              \n",
       "Entropy 1.0009595\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▉▊ 0.02\n",
       "4 ▏ 0.00\n",
       "5 ▎ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.35\n",
       "8 ▉ 0.01\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.62\n",
       "Model        : Tensor Int64 [1] [ 9]\n",
       "Ground Truth : Tensor Int64 [1] [ 9]\n",
       "              \n",
       "              \n",
       "              \n",
       "              \n",
       "     %##%     \n",
       "    :%+%%.    \n",
       "    -%  %:    \n",
       "    -%  %+    \n",
       "     +  %+    \n",
       "        %+    \n",
       "        %+    \n",
       "        %#    \n",
       "        %%    \n",
       "        .+    \n",
       "Entropy 1.0057298\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▉▉▍ 0.03\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▎ 0.33\n",
       "8 ▏ 0.00\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.63\n",
       "Model        : Tensor Int64 [1] [ 9]\n",
       "Ground Truth : Tensor Int64 [1] [ 7]\n",
       "              \n",
       "              \n",
       "              \n",
       "   %%%%%      \n",
       "      .%      \n",
       "      %.      \n",
       "    =%%%+     \n",
       "    %   %# -  \n",
       "         %%.  \n",
       "        *%-   \n",
       "       %:%    \n",
       "      %-%=    \n",
       "      %%-     \n",
       "              \n",
       "Entropy 1.0500848\n",
       "0 ▉▉▉▉▍ 0.07\n",
       "1 ▎ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.79\n",
       "4 ▉▉▊ 0.04\n",
       "5 ▉▉▉▎ 0.05\n",
       "6 ▏ 0.00\n",
       "7 ▍ 0.01\n",
       "8 ▎ 0.00\n",
       "9 ▉▊ 0.03\n",
       "Model        : Tensor Int64 [1] [ 3]\n",
       "Ground Truth : Tensor Int64 [1] [ 3]\n",
       "              \n",
       "              \n",
       "              \n",
       "     :*       \n",
       "      %       \n",
       "      %%      \n",
       "      :%      \n",
       "       %*     \n",
       "       +*     \n",
       "        %     \n",
       "        %     \n",
       "        %     \n",
       "        =     \n",
       "              \n",
       "Entropy 1.590256\n",
       "0 ▏ 0.00\n",
       "1 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.36\n",
       "2 ▏ 0.00\n",
       "3 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.10\n",
       "4 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.32\n",
       "5 ▉▉▉▎ 0.02\n",
       "6 ▏ 0.00\n",
       "7 ▎ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.12\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▍ 0.07\n",
       "Model        : Tensor Int64 [1] [ 1]\n",
       "Ground Truth : Tensor Int64 [1] [ 1]\n",
       "              \n",
       "              \n",
       "              \n",
       "    =   =     \n",
       "    %%%%%.    \n",
       "      :%%     \n",
       "       %*     \n",
       "    .%%%%%%%%+\n",
       "      %%%*:   \n",
       "      %%      \n",
       "      %%      \n",
       "      %%      \n",
       "      %%      \n",
       "              \n",
       "Entropy 0.9592192\n",
       "0 ▏ 0.00\n",
       "1 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▊ 0.28\n",
       "2 ▋ 0.01\n",
       "3 ▍ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▍ 0.01\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.67\n",
       "8 ▏ 0.00\n",
       "9 ▉▉▏ 0.03\n",
       "Model        : Tensor Int64 [1] [ 7]\n",
       "Ground Truth : Tensor Int64 [1] [ 7]\n",
       "              \n",
       "              \n",
       "              \n",
       "      =%#*    \n",
       "    :%%- .#   \n",
       "    %%   :%   \n",
       "   .%    #=   \n",
       "         %    \n",
       "       %%#    \n",
       "     -%%%%    \n",
       "     %%%.%    \n",
       "     #%  *+   \n",
       "          :   \n",
       "              \n",
       "Entropy 1.0005924\n",
       "0 ▍ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.48\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▎ 0.47\n",
       "8 ▉▉▉▋ 0.03\n",
       "9 ▉▎ 0.01\n",
       "Model        : Tensor Int64 [1] [ 2]\n",
       "Ground Truth : Tensor Int64 [1] [ 2]\n",
       "              \n",
       "              \n",
       "      -       \n",
       "    :%%%-     \n",
       "   :%   %     \n",
       "   +:   :%-   \n",
       "  -%     *%   \n",
       "  *:      %*  \n",
       "  ==      *%  \n",
       "   *      :%  \n",
       "   #::..:*%%  \n",
       "    :%*%%-:   \n",
       "              \n",
       "              \n",
       "Entropy 1.3647958\n",
       "0 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.50\n",
       "1 ▏ 0.00\n",
       "2 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.23\n",
       "3 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.23\n",
       "4 ▏ 0.00\n",
       "5 ▉▉▉▏ 0.03\n",
       "6 ▏ 0.00\n",
       "7 ▏ 0.00\n",
       "8 ▏ 0.00\n",
       "9 ▉▍ 0.01\n",
       "Model        : Tensor Int64 [1] [ 0]\n",
       "Ground Truth : Tensor Int64 [1] [ 0]\n",
       "              \n",
       "              \n",
       "              \n",
       "      %-      \n",
       "       :%     \n",
       "        #     \n",
       "    -%#%*     \n",
       "   ::  @%.    \n",
       "   *  %  #.   \n",
       "    %%    %   \n",
       "           %  \n",
       "            % \n",
       "              \n",
       "              \n",
       "Entropy 1.1518966\n",
       "0 ▉▉▉▎ 0.06\n",
       "1 ▍ 0.01\n",
       "2 ▊ 0.01\n",
       "3 ▏ 0.00\n",
       "4 ▉▉▊ 0.05\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▏ 0.00\n",
       "8 ▍ 0.01\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.86\n",
       "Model        : Tensor Int64 [1] [ 9]\n",
       "Ground Truth : Tensor Int64 [1] [ 2]\n",
       "              \n",
       "              \n",
       "              \n",
       "    =%%%%+    \n",
       "   .#. =#%    \n",
       "   %*   %#    \n",
       "   #.   .%    \n",
       "   .#   *%:   \n",
       "    .%%%- =   \n",
       "           #  \n",
       "           #  \n",
       "      -%% =%  \n",
       "       =%%#   \n",
       "              \n",
       "Entropy 1.1256037\n",
       "0 ▉▊ 0.02\n",
       "1 ▏ 0.00\n",
       "2 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▏ 0.29\n",
       "3 ▎ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.59\n",
       "9 ▉▉▉▉▉▉▉▉▎ 0.10\n",
       "Model        : Tensor Int64 [1] [ 8]\n",
       "Ground Truth : Tensor Int64 [1] [ 9]\n",
       "              \n",
       "              \n",
       "      --%:    \n",
       "     .   %    \n",
       "         %:   \n",
       "     ** .%    \n",
       "      *%%.    \n",
       "      %%*%    \n",
       "     %*  %    \n",
       "     %   %    \n",
       "     %  %:    \n",
       "     %%%:     \n",
       "              \n",
       "              \n",
       "Entropy 1.0862491\n",
       "0 ▏ 0.00\n",
       "1 ▉▉▋ 0.03\n",
       "2 ▉▉▉▉▉ 0.05\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▋ 0.01\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▎ 0.42\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.50\n",
       "9 ▏ 0.00\n",
       "Model        : Tensor Int64 [1] [ 8]\n",
       "Ground Truth : Tensor Int64 [1] [ 8]\n",
       "              \n",
       "              \n",
       "              \n",
       "        %%    \n",
       "        %%    \n",
       "       *%#    \n",
       "      :%%-    \n",
       "      .%%     \n",
       "      %%+     \n",
       "     +%%      \n",
       "     *%+      \n",
       "     =%=      \n",
       "      =:      \n",
       "              \n",
       "Entropy 1.0085171\n",
       "0 ▏ 0.00\n",
       "1 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.81\n",
       "2 ▎ 0.00\n",
       "3 ▍ 0.01\n",
       "4 ▎ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▏ 0.16\n",
       "8 ▎ 0.01\n",
       "9 ▍ 0.01\n",
       "Model        : Tensor Int64 [1] [ 1]\n",
       "Ground Truth : Tensor Int64 [1] [ 1]\n",
       "              \n",
       "              \n",
       "              \n",
       "    -@@:      \n",
       "   -#  +:     \n",
       "   #-   %     \n",
       "    %: ..-    \n",
       "     +%=*%    \n",
       "       .%%    \n",
       "        %*    \n",
       "        %%    \n",
       "        %%    \n",
       "        %.    \n",
       "              \n",
       "Entropy 1.5438546\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▉▉▉▉ 0.03\n",
       "3 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▎ 0.14\n",
       "4 ▉▉▉▉▉▊ 0.05\n",
       "5 ▊ 0.01\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▊ 0.04\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▎ 0.31\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.42\n",
       "Model        : Tensor Int64 [1] [ 9]\n",
       "Ground Truth : Tensor Int64 [1] [ 9]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "testMnistStream = V.MNIST {batchSize = 1, mnistData = testData}\n",
    "forM_ [0 .. 200] $ displayImage (fromLocalModel net) <=< getItem testMnistStream"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "established-white",
   "metadata": {},
   "source": [
    "Reflecting on the softmax outputs above, we can state that:\n",
    "\n",
    "1. The softmax output alone is not enough to estimate model uncertainty. We observe wrong predictions even when the margin between the top and second-best guesses is large.\n",
    "2. Sometimes the prediction and the ground truth coincide, yet the entropy is still high. Such cases deserve a closer inspection.\n",
    "\n",
    "To illustrate the last point, let us take a closer look at a case with high entropy. By running several realizations of the stochastic model, we can check whether the model has any \"doubt\", i.e. whether it picks different answers on different runs."
   ]
  },
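  {
   "cell_type": "markdown",
   "id": "entropy-definition-note",
   "metadata": {},
   "source": [
    "As a reminder of what `predictiveEntropy` computes (assuming it follows the standard Monte-Carlo dropout recipe): average the softmax outputs of $T$ stochastic forward passes and take the entropy of that mean distribution,\n",
    "\n",
    "$$H = -\\sum_{c=1}^{C} \\bar{p}_c \\log \\bar{p}_c, \\qquad \\bar{p}_c = \\frac{1}{T} \\sum_{t=1}^{T} p_c^{(t)},$$\n",
    "\n",
    "where $p_c^{(t)}$ is the probability assigned to class $c$ on pass $t$. A high $H$ means the averaged prediction is spread over several classes, i.e. the model is uncertain."
   ]
  },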
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "japanese-support",
   "metadata": {},
   "outputs": [],
   "source": [
    "displayImage' :: MLP -> (Tensor, Tensor) -> IO ()\n",
    "displayImage' model (testImg, testLabel) = do\n",
    "  let repeatN = 10\n",
    "  -- Run repeatN stochastic forward passes (dropout active);\n",
    "  -- exp converts log-softmax outputs back to softmax probabilities\n",
    "  pred' <- forM [1..repeatN] $ \\_ -> exp\n",
    "                                     <$> mlp model True testImg\n",
    "  -- One deterministic pass (dropout disabled) for the point prediction\n",
    "  pred0 <- mlp model False testImg\n",
    "  let entropy = predictiveEntropy $ Torch.cat (Dim 0) pred'\n",
    "\n",
    "  V.dispImage testImg\n",
    "  putStr \"Entropy \"\n",
    "  print entropy\n",
    "  forM_ pred' ( \\pred ->\n",
    "      putStrLn \"\"\n",
    "      >> bar (map show [0..9]) (asValue $ flattenAll pred :: [Float]) )\n",
    "  putStrLn $ \"Model        : \" ++ (show . argmax (Dim 1) RemoveDim . exp $ pred0)\n",
    "  putStrLn $ \"Ground Truth : \" ++ show testLabel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "optimum-nigeria",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "              \n",
       "              \n",
       "     +%       \n",
       "     %        \n",
       "     *        \n",
       "    #-  +%%=  \n",
       "    %  %%  %  \n",
       "    % %+   #  \n",
       "    % %    *  \n",
       "    %  % :%   \n",
       "    #*:=%#    \n",
       "     -%=.     \n",
       "              \n",
       "              \n",
       "Entropy 1.1085687\n",
       "\n",
       "0 ▎ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.90\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▍ 0.10\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▋ 0.01\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▋ 0.01\n",
       "5 ▎ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.74\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.20\n",
       "9 ▉▉▋ 0.04\n",
       "\n",
       "0 ▋ 0.01\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▎ 0.01\n",
       "4 ▉▉▉▏ 0.05\n",
       "5 ▏ 0.00\n",
       "6 ▉▉▎ 0.04\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.86\n",
       "9 ▉▎ 0.02\n",
       "\n",
       "0 ▋ 0.01\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▋ 0.01\n",
       "5 ▎ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.74\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.20\n",
       "9 ▉▉▋ 0.04\n",
       "\n",
       "0 ▉▉▉▉▍ 0.04\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▉▉▉▉▉▉▉▉▉▉▏ 0.09\n",
       "5 ▉▏ 0.01\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▋ 0.30\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.12\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.43\n",
       "\n",
       "0 ▋ 0.01\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▋ 0.01\n",
       "5 ▎ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.74\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.20\n",
       "9 ▉▉▋ 0.04\n",
       "\n",
       "0 ▋ 0.01\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▋ 0.01\n",
       "5 ▎ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.74\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.20\n",
       "9 ▉▉▋ 0.04\n",
       "\n",
       "0 ▋ 0.01\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▋ 0.01\n",
       "5 ▎ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.74\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.20\n",
       "9 ▉▉▋ 0.04\n",
       "\n",
       "0 ▉▏ 0.02\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▋ 0.01\n",
       "5 ▏ 0.00\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.80\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▋ 0.10\n",
       "9 ▉▉▉▉▎ 0.07\n",
       "\n",
       "0 ▉▉▉▉▍ 0.04\n",
       "1 ▏ 0.00\n",
       "2 ▎ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▉▉▉▉▉▉▉▉▉▉▏ 0.09\n",
       "5 ▉▏ 0.01\n",
       "6 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▋ 0.30\n",
       "7 ▏ 0.00\n",
       "8 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▍ 0.12\n",
       "9 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0.43\n",
       "Model        : Tensor Int64 [1] [ 6]\n",
       "Ground Truth : Tensor Int64 [1] [ 6]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "(displayImage' (fromLocalModel net) <=< getItem testMnistStream) 11"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "beneficial-navigator",
   "metadata": {},
   "source": [
    "Wow! The model sometimes \"sees\" digit 6, sometimes digit 8, and sometimes digit 9!\n",
    "For contrast, here is how predictions with low entropy typically look."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "selected-fishing",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "              \n",
       "              \n",
       "              \n",
       "              \n",
       "   #%%*****   \n",
       "      ::: %   \n",
       "         %:   \n",
       "        :%    \n",
       "        #:    \n",
       "       :%     \n",
       "       %.     \n",
       "      #=      \n",
       "     :%.      \n",
       "     =#       \n",
       "Entropy 4.8037423e-4\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "\n",
       "0 ▏ 0.00\n",
       "1 ▏ 0.00\n",
       "2 ▏ 0.00\n",
       "3 ▏ 0.00\n",
       "4 ▏ 0.00\n",
       "5 ▏ 0.00\n",
       "6 ▏ 0.00\n",
       "7 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 1.00\n",
       "8 ▏ 0.00\n",
       "9 ▏ 0.00\n",
       "Model        : Tensor Int64 [1] [ 7]\n",
       "Ground Truth : Tensor Int64 [1] [ 7]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "(displayImage' (fromLocalModel net) <=< getItem testMnistStream) 0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "answering-institute",
   "metadata": {},
   "source": [
    "The model consistently \"sees\" digit 7. Note that the results above are model-dependent, which is why we also share our trained model for reproducibility. Even so, every realization of the\n",
    "stochastic model may still differ, especially in those cases where the\n",
    "entropy is high."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "local-concentrate",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "I hope you are now convinced that model uncertainty estimation is an invaluable tool. This simple technique is essential when applying deep learning to real-life decision making. This post also demonstrates how to use the Hasktorch library in practice. Notably, it is very straightforward to run computations on a GPU. Overall, Hasktorch can be used for real-world deep learning: the code is well structured and relies on the mature Torch library. On the other hand, it would be desirable to capture high-level patterns so that the user does not need to think about low-level concepts such as dependent and independent tensors. The end user should be able to simply write `save net \"weights.bin\"` and `mynet <- load \"weights.bin\"` without any indirection. The same reasoning applies to the `trainLoop`: the user should not need to reinvent it every time. Eventually, a higher-level package on top of Hasktorch should capture these best practices, similar to [PyTorch Lightning](https://www.pytorchlightning.ai/) or [fast.ai](https://github.com/fastai/fastai).\n",
    "\n",
    "Now it is your turn: explore image recognition with the [AlexNet](https://github.com/hasktorch/hasktorch/blob/master/examples/alexNet/AlexNet.hs) convolutional network and have fun!"
   ]
  },
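  {
   "cell_type": "markdown",
   "id": "save-load-sketch",
   "metadata": {},
   "source": [
    "To make that wish concrete, here is a minimal sketch of what such a high-level interface could look like. The names `saveModel` and `loadModel` are hypothetical, not part of Hasktorch; the sketch assumes they would simply wrap `saveParams` and `loadParams` from `Torch.Serialize`, hiding the dependent/independent parameter plumbing from the user:\n",
    "\n",
    "```haskell\n",
    "-- Hypothetical convenience wrappers (not in Hasktorch 0.2.0.0)\n",
    "saveModel :: Parameterized model => model -> FilePath -> IO ()\n",
    "saveModel = saveParams\n",
    "\n",
    "-- Takes an initialized model of the right shape and fills in the weights\n",
    "loadModel :: Parameterized model => model -> FilePath -> IO model\n",
    "loadModel = loadParams\n",
    "```"
   ]
  },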
  {
   "cell_type": "markdown",
   "id": "vital-conservative",
   "metadata": {},
   "source": [
    "## Learn More\n",
    "\n",
    "* [Improving neural networks by preventing\n",
    "co-adaptation of feature detectors](https://arxiv.org/pdf/1207.0580.pdf)\n",
    "* [Dropout: A Simple Way to Prevent Neural Networks from\n",
    "Overfitting](https://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf)\n",
    "* [Tutorial: Dropout as Regularization and Bayesian Approximation](https://xuwd11.github.io/Dropout_Tutorial_in_PyTorch/)\n",
    "* [Two Simple Ways To Measure Your Model’s Uncertainty](https://towardsdatascience.com/2-easy-ways-to-measure-your-image-classification-models-uncertainty-1c489fefaec8)\n",
    "* [Uncertainty in Deep Learning, Yarin Gal](http://mlg.eng.cam.ac.uk/yarin/thesis/thesis.pdf)\n",
    "* [AlexNet example in Hasktorch](https://github.com/hasktorch/hasktorch/tree/master/examples/alexNet)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Haskell",
   "language": "haskell",
   "name": "haskell"
  },
  "language_info": {
   "codemirror_mode": "ihaskell",
   "file_extension": ".hs",
   "mimetype": "text/x-haskell",
   "name": "haskell",
   "pygments_lexer": "Haskell",
   "version": "8.10.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
