{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# The Deep Learning Book (Simplified)\n",
    "## Part II - Modern Practical Deep Networks\n",
    "*This is a series of blog posts on the [Deep Learning book](http://deeplearningbook.org)\n",
    "where we attempt to summarize each chapter, highlighting the concepts we found most important so that others can use it as a starting point for reading the chapters, while adding further explanations on a few areas that we found difficult to grasp. Please refer to [this](http://www.deeplearningbook.org/contents/notation.html) for more clarity on \n",
    "notation.*\n",
    "\n",
    "\n",
    "## Chapter 11: Practical Methodology\n",
    "\n",
    "We are excited to say that this is the last chapter we cover before entering the Deep Learning Research section of the book, which is, for the most part, unfamiliar terrain for us. Much of what we've talked about till now covered the theoretical aspects of Deep Learning. However, there's a large gap between theory and what works in practice. This chapter is dedicated to practitioners and people looking to apply Deep Learning to build cool applications and solve real-world problems. \n",
    "\n",
    "The various choices one might need to make include which type of data to gather and where to find it, whether to gather more data, change model complexity, add or remove regularization, improve optimization, debug the software implementation, etc. The recommended practical design process is as follows:\n",
    "\n",
    "- Decide on a single-number metric to evaluate your model. This represents the final goal, and you need to set a specific target that you want to achieve. Coming from Andrew Ng's Machine Learning Yearning and also from personal experience, most teams forget to decide on this, only to realize very late in the process that setting it up would have given them a clear guide on what they wanted to improve.\n",
    "\n",
    "- Get an end-to-end pipeline working as soon as possible, including the evaluation of the required metrics. This will, more often than not, require that you use a very simple model that can accept the inputs correctly and produce the outputs in the right format for further training / evaluation / analysis. The major benefit is that you can now focus solely on improving the model: after making any specific change, you can instantly get the final results and check whether that change improved the model or not.\n",
    "\n",
    "![pipeline](images/workflow_final2.png)\n",
    "\n",
    "- Instrument the system well to determine bottlenecks in performance which requires diagnosing which components perform worse than expected and understanding the reason behind poor performance - overfitting, underfitting, modelling, problems in data, software implementation errors, etc.\n",
    "\n",
    "- Based on the diagnosis above, keep improving the algorithm iteratively either by adding more data, increasing the capacity of the model, tuning hyperparameters or improving the quality of data by better annotation, etc.\n",
    "\n",
    "The chapter is organized as follows:\n",
    "\n",
    "**1. Performance Metrics** <br>\n",
    "**2. Default Baseline Models** <br>\n",
    "**3. Determining Whether to Gather More Data** <br>\n",
    "**4. Selecting Hyperparameters** <br>\n",
    "**5. Debugging Strategies**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Performance Metrics\n",
    "\n",
    "As mentioned above, it's extremely important to decide on which error metric to use as that will ultimately guide you on how to make progress. It should be sufficiently representative of the end goal that you are trying to achieve. Let me give you an example. Suppose you are working on a [Semantic Segmentation](http://blog.qure.ai/notes/semantic-segmentation-deep-learning-review) problem, where we want to assign a class to each pixel of the input image. The image below demonstrates what the output should look like given an input image:\n",
    "\n",
    "![semantic seg](images/ss2-original.png)\n",
    "\n",
    "In the figure above, all the pixels constituting the man have been marked as one class, those representing the bicycle as another class, and the remaining ones as the background class.\n",
    "\n",
    "To simplify, let's consider a binary segmentation task where class 1 represents \"man\" and class 0 represents the background class. Thus, the expected output now becomes:\n",
    "\n",
    "![semantic seg](images/ss2.png)\n",
    "\n",
    "Notice that the pixels belonging to the bicycle class are also labelled as 0, since we are considering only a binary semantic segmentation task. Thus, the bicycle class now comes under the background class.\n",
    "\n",
    "Now, what would be a reasonable metric here that is representative of the final goal? Think before looking down.\n",
    "\n",
    "A default metric to start off with is [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision), which indicates the percentage of pixels where our model predicted the right class. Although the example above has a fairly equal number of class 1 and class 0 pixels, this need not be the case. There can be images where there is a single person in the image or there can be a lot of images with no people. In such cases where there is a high [class imbalance](http://www.chioka.in/class-imbalance-problem/), a very simple way to achieve a high accuracy could be to always predict the class 0. However, it's clearly not a good classifier although it might get 90% accuracy. You'd ideally like a metric that is not dependent on the distribution of the classes in the dataset. For this reason, the most commonly used metric for semantic segmentation is **Intersection over Union (IoU)**, which is defined as follows:\n",
    "\n",
    "![iou](images/iou.png)\n",
    "\n",
    "The image below shows why IoU is a good metric:\n",
    "\n",
    "![iou validation](images/iou_examples.png)\n",
    "\n",
    "In the first case, although the red box lies almost entirely within the green one, the union is much larger than the intersection, which makes the IoU low. In the other two cases, the intersection approaches the union more and more.\n",
    "\n",
    "One more thing that I'd like to point out before moving on from this example is that IoU is equally useful for images with no object, i.e. where the entire image represents the background class. This is achieved by adding a small $\\epsilon$ to both the numerator and the denominator when calculating IoU. Now, if the ground truth contains only the background class and our model predicts that as well, both the intersection and the union are 0. Thus, the IoU becomes (0 + $\\epsilon$) / (0 + $\\epsilon$) = $\\epsilon$ / $\\epsilon$ = 1. This indicates the importance of choosing the right metric.\n",
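    "\n",
    "As a minimal sketch (our own, not from the book), the smoothed IoU can be computed for binary masks with NumPy as follows:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def iou(pred, target, eps=1e-6):\n",
    "    # pred, target: binary masks (1 = object, 0 = background)\n",
    "    pred, target = pred.astype(bool), target.astype(bool)\n",
    "    intersection = np.logical_and(pred, target).sum()\n",
    "    union = np.logical_or(pred, target).sum()\n",
    "    # eps keeps the metric at 1 when both masks are empty\n",
    "    return (intersection + eps) / (union + eps)\n",
    "\n",
    "a = np.array([[1, 1], [0, 0]])\n",
    "b = np.array([[1, 0], [0, 0]])\n",
    "print(iou(a, b))          # intersection 1, union 2 -> ~0.5\n",
    "print(iou(a * 0, b * 0))  # both masks empty -> 1.0\n",
    "```\n",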
    "\n",
    "Then there are problems where one type of mistake is more costly than another. In spam detection, classifying a spam mail as non-spam is much less costly than classifying a non-spam message as spam. In such cases, instead of measuring the error rate, we might be interested in some form of total cost that is representative of our problem.\n",
    "\n",
    "Similar to the semantic segmentation problem described above, there are many cases with a large class imbalance. For example, in a particular population, one out of 10,000 people might have cancer, so 9,999 out of 10,000 don't. A classifier that simply classifies everyone as not having cancer then achieves an accuracy of 99.99%. But would you be willing to use this classifier to test yourself?\n",
    "\n",
    "Definitely not. In such a case, accuracy is a bad metric. We instead use **precision** and **recall** to evaluate our classifier. I generally use this figure to remember what both of them mean:\n",
    "\n",
    "![pr](images/precision_recall.png)\n",
    "\n",
    "Precision represents the fraction of detections that were actually true, whereas recall stands for the fraction of true events that were successfully detected.\n",
    "\n",
    "![pr equation](images/pr_equation.png)\n",
    "\n",
    "Now, a detector that says no case is cancer achieves perfect precision, but zero recall. Often, it's desirable to have a single metric to judge by, rather than a trade-off between two. The F1-score, which is the harmonic mean of precision and recall, is a widely accepted such metric:\n",
    "\n",
    "![f1 equation](images/f1.png)\n",
    "\n",
    "However, the F1-score gives equal weight to both precision and recall. There are cases where you want to weigh one over the other, and hence we have the more general F-beta score:\n",
    "\n",
    "![fbeta equation](images/fbeta-score.jpg)\n",
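    "\n",
    "To tie these together, here is a small hypothetical sketch (our own, counts made up) computing precision, recall and F-scores from raw counts:\n",
    "\n",
    "```python\n",
    "def precision_recall(tp, fp, fn):\n",
    "    precision = tp / float(tp + fp)\n",
    "    recall = tp / float(tp + fn)\n",
    "    return precision, recall\n",
    "\n",
    "def f_beta(precision, recall, beta=1.0):\n",
    "    # beta > 1 weighs recall more heavily, beta < 1 weighs precision more\n",
    "    b2 = beta ** 2\n",
    "    return (1 + b2) * precision * recall / (b2 * precision + recall)\n",
    "\n",
    "# hypothetical cancer screen: 10 hits, 5 false alarms, 40 missed cases\n",
    "p, r = precision_recall(tp=10, fp=5, fn=40)\n",
    "print(p, r)                  # ~0.67 precision, 0.2 recall\n",
    "print(f_beta(p, r))          # F1: harmonic mean of the two\n",
    "print(f_beta(p, r, beta=2))  # F2: favours recall\n",
    "```\n",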
    "\n",
    "Also, in some cases the machine learning algorithm can refuse to make any decision at all when it's not very confident about its decision. This is important in situations where a misclassification can be harmful and it'd be much better for a human to have a look. Then again, an ML system is useful only when it significantly reduces the number of instances that a human operator must process. A natural performance metric here is **coverage**, the fraction of examples for which the machine learning system is able to produce a response.\n",
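    "\n",
    "A toy sketch of coverage (our own construction), where the model answers only when its top class probability clears a confidence threshold:\n",
    "\n",
    "```python\n",
    "def coverage_and_accuracy(probs, labels, threshold=0.9):\n",
    "    # answer only when the top class probability clears the threshold\n",
    "    answered = [(p, y) for p, y in zip(probs, labels) if max(p) >= threshold]\n",
    "    coverage = len(answered) / float(len(probs))\n",
    "    correct = sum(1 for p, y in answered if p.index(max(p)) == y)\n",
    "    accuracy = correct / float(len(answered)) if answered else 0.0\n",
    "    return coverage, accuracy\n",
    "\n",
    "probs = [[0.95, 0.05], [0.55, 0.45], [0.10, 0.90], [0.60, 0.40]]\n",
    "labels = [0, 0, 1, 1]\n",
    "print(coverage_and_accuracy(probs, labels))  # answers 2 of 4 -> (0.5, 1.0)\n",
    "```\n",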
    "\n",
    "In most applications, it might not be possible to achieve absolutely zero error even with infinite data, either because the features are not sufficiently representative or because the system is intrinsically stochastic. The minimum error achievable by any system is called the Bayes error.\n",
    "\n",
    "A major bottleneck to performance is often that training data is limited. Once you move away from standard datasets like [MNIST](http://yann.lecun.com/exdb/mnist/) into more real-world problems, you'll realize that getting accurate data is much harder than it initially seems and, most often, doesn't come for free either. So you really need to analyze how much additional data is going to improve your performance metric. I'll try to explain this with an example. As mentioned in Andrew Ng's book, [Machine Learning Yearning](http://www.mlyearning.org), a standard method of error analysis is to observe (say) 100 examples where your model fails and then check which categories account for the maximum % of errors:\n",
    "\n",
    "![error analysis](images/ng_error_analysis.png)\n",
    "\n",
    "From the figure above, it can be clearly seen that collecting more Dog images might improve the error rate by at most 8%. However, collecting more Blurry images might potentially improve the error rate by 61%, which is very significant. Thus, it makes sense to spend time collecting more blurry images, and doing this exercise saves you the embarrassment of spending months collecting better Dog images only to see the error rate improve by 8%. For more such tips, refer to the book linked above.\n",
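    "\n",
    "This tallying exercise is easy to script. A hypothetical sketch (category names and counts made up to mirror the figure):\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "# tags noted while eyeballing 100 misclassified images (hypothetical)\n",
    "tags = ['blurry'] * 61 + ['dog'] * 8 + ['other'] * 31\n",
    "for tag, n in Counter(tags).most_common():\n",
    "    # n% of errors is the ceiling on improvement from fixing this category\n",
    "    print(tag, n, '% of errors')\n",
    "```\n",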
    "\n",
    "The bottom line is that you need to decide beforehand what a realistic desired error rate is for your intended application and use that to guide your design decisions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Default Baseline Models\n",
    "\n",
    "As mentioned at the start, it is very important to establish a working end-to-end system as soon as possible. Depending on the complexity of the problem, we might even choose to begin with a very simple model like logistic regression. However, if the problem you intend to solve falls under the [\"AI-complete\"](https://en.wikipedia.org/wiki/AI-complete) category, like Image Classification, Speech Recognition, etc., starting off with a deep learning model is almost always better.\n",
    "\n",
    "You first choose the general category of model based on the structure of your data. If your data consists of fixed-size vectors and you intend to perform a supervised learning task, use a multi-layer perceptron. If your data has a fixed topological structure, a [Convolutional Neural Network](https://medium.com/inveterate-learner/deep-learning-book-chapter-9-convolutional-networks-45e43bfc718d) might be the best way forward. Similarly, if your data has a sequential pattern, [Recurrent Neural Networks](https://en.wikipedia.org/wiki/Recurrent_neural_network) would be the ideal starting point. However, given the [speed](https://www.technologyreview.com/s/513696/deep-learning/) at which Deep Learning is progressing, the default algorithms are likely to change. For example, 3–4 years ago [AlexNet](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf) would have been the ideal starting point for image-based tasks, whereas now [ResNets](https://arxiv.org/abs/1512.03385) are the widely accepted default choice.\n",
    "\n",
    "![imagenet](images/imagenet_progress.png)\n",
    "ImageNet top-5 error progress over the years. AlexNet had 8 layers, whereas the more powerful ResNet has more than 150 layers. Source: https://medium.com/@RaghavPrabhu/cnn-architectures-lenet-alexnet-vgg-googlenet-and-resnet-7c81c017b848\n",
    "\n",
    "For training the model, a reasonable starting point is the Adam optimizer. SGD with momentum and a learning rate decay is widely used too; popular decay schemes include decaying the rate linearly until a fixed minimum, decaying it exponentially, or reducing it by a factor of 2–10 each time the validation error plateaus. [Batch-Normalization](https://medium.com/inveterate-learner/deep-learning-book-chapter-8-optimization-for-training-deep-models-part-ii-438fb4f6d135#5bbf) generally improves performance by providing stability and allowing larger learning rates, thereby helping reach convergence faster. \n",
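    "\n",
    "As a quick illustration (our own, with made-up hyperparameter values), two of these decay schemes might look like:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def exp_decay(lr0, step, k=0.01):\n",
    "    # exponential decay: lr0 * e^(-k * step)\n",
    "    return lr0 * math.exp(-k * step)\n",
    "\n",
    "def plateau_decay(lr, factor=0.5):\n",
    "    # call when validation error plateaus: cut the rate by 2-10x\n",
    "    return lr * factor\n",
    "\n",
    "lr = 0.1\n",
    "print(exp_decay(lr, 0), exp_decay(lr, 100))  # 0.1, then ~0.037\n",
    "print(plateau_decay(lr))                     # 0.05\n",
    "```\n",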
    "\n",
    "As you increase your model complexity, you'll eventually become prone to overfitting since your training data is limited. Thus, it's always advisable to add some [regularization](https://medium.com/inveterate-learner/deep-learning-book-chapter-7-regularization-for-deep-learning-937ff261875c) to your model as well. Common choices include an L2 penalty on the loss function, Dropout between layers, early stopping and Batch-Normalization. Using Batch-Normalization [allows the omission](http://forums.fast.ai/t/batch-normalisation-vs-dropout/5172) of Dropout. If you missed our post on regularization, feel free to go through it; all of these are [explained in detail](https://medium.com/inveterate-learner/deep-learning-book-chapter-7-regularization-for-deep-learning-937ff261875c) there.\n",
    "\n",
    "If your task is reasonably similar to another task where prior work has been done, it is advisable to copy that model (along with its trained weights) and use it as the initialization point for your task. This way of training is called **transfer learning**. For example, in the famous [Dogs Vs Cats Image Classification challenge on Kaggle](https://www.kaggle.com/c/dogs-vs-cats), a model pretrained on ImageNet, which contains similar images, was used as the starting point to achieve the best performance, rather than training a model from scratch.\n",
    "\n",
    "![transfer learning](images/transfer learning.png)\n",
    "Here, the large dataset of object images refers to ImageNet. Source: https://towardsdatascience.com/transfer-learning-using-differential-learning-rates-638455797f00\n",
    "\n",
    "Finally, some domains like Natural Language Processing (NLP) benefit tremendously from unsupervised learning methods during initialization. In the current trend of Deep Learning applied to NLP, it's common to represent each word as an embedding (vector), and there exist unsupervised methods like [word2vec](https://en.wikipedia.org/wiki/Word2vec) and [GloVe](https://nlp.stanford.edu/projects/glove/) for learning these word embeddings."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Determining Whether to Gather More Data\n",
    "\n",
    "A rookie mistake a lot of people make is to keep trying different algorithms to improve the performance of their models, when simply improving the data they have, or gathering more of it, can be the best source of improvement. We touched upon how to decide when to get more data, but since data is the most integral part of getting an AI solution working, we'll explore this in a bit more detail now.\n",
    "So, how do you decide when to get more data? Firstly, if the performance of your model on the training set itself is poor, it is not making full use of the information present in your data, and you need to increase the complexity of your model by adding more layers or increasing the number of hidden units in each layer. Hyperparameter tuning is also an important step; you'd be surprised how large an effect choosing the right hyperparameters can have on getting your model working. For example, the learning rate is THE [most important](https://medium.com/inveterate-learner/deep-learning-book-chapter-8-optimization-for-training-deep-models-part-i-20ae75984cb2#7da2) hyperparameter to tune, and setting it right for your problem can save you hours of wasted effort. However, if your model is reasonably complex and the optimization carefully tuned but the performance is still not up to the desired level, the problem might be the quality of the data instead, in which case you have to go back to square one and start collecting cleaner data.\n",
    "If the training error is low but the validation error is much higher, your safest bet is usually to say:\n",
    "\n",
    "![data meme](images/data_meme.jpg)\n",
    "\n",
    "This specific situation, where training error is low but test error is high, is called overfitting and is one of the most common problems in training deep models, in which case regularization might help. To reinforce the importance of data in modern deep networks, for those who might not be aware: the reason Deep Learning started gaining attention was the 2012 ImageNet competition, where a deep learning model outperformed the previous best model by a significant margin. ImageNet consists of millions of annotated images, and the creation of similar large labelled datasets is the reason that extremely complex problems like object detection are largely solved today.\n",
    "Finally, it's generally observed that adding a small fraction of the total number of examples won't have a noticeable effect on performance. Thus, we need to monitor how much the performance of the model improves as the dataset size increases, and it is best monitored on a logarithmic scale of dataset size.\n",
    "\n",
    "![train dev](images/train_dev.png)\n",
    "\n",
    "As can be seen from the plot above, the training error generally increases as you increase the dataset size, because the model finds it harder to fit all the datapoints exactly. At the same time, the validation (dev) error decreases, as the model learns to generalize better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Selecting Hyperparameters\n",
    "\n",
    "Most deep learning algorithms have a lot of hyperparameters that need to be chosen correctly. Different hyperparameters control different aspects of the model. Some affect the memory cost, like the number of layers to use, while others affect the performance, like the keep probability for Dropout, the learning rate, momentum, etc. Broadly, there are two approaches to choosing these hyperparameters. The first is to choose them manually, which requires understanding what the hyperparameters do and how they affect training and generalization. The other is to choose them automatically, which reduces the complexity a lot but comes at the cost of compute power. We'll discuss these two approaches in more detail now:\n",
    "\n",
    "**i) Manual Hyperparameter Tuning:**\n",
    "\n",
    "As briefly mentioned above, manual hyperparameter tuning requires a lot of domain knowledge and a fundamental understanding of training error, generalization error, learning theory, etc. Its primary aim is to adjust the effective capacity of the model to match the complexity of the task, while trading off memory and runtime. The factors influencing effective capacity are the representational capacity of the model, the ability of the learning algorithm to minimize the cost function used to train the model, and the degree to which the cost function and the training procedure regularize the model. \n",
    "\n",
    "The generalization error typically follows a U-shaped curve as shown below:\n",
    "![u curve](images/u-curve.png)\n",
    "\n",
    "On the extreme left, we are in the underfitting regime where the capacity of the model is low and both training and generalization errors are high. On the extreme right, we enter the overfitting regime where the training error is low but the gap between the training error and test error is high. The optimal spot is somewhere in the middle where we trade-off a slightly higher training error for the lowest possible generalization error. \n",
    "\n",
    "Many hyperparameters affect overfitting (or underfitting), and in different ways. For e.g., increasing certain hyperparameters like the number of hidden units increases the chance of overfitting, whereas increasing others like the weight decay coefficient reduces it. Some are discrete, like the number of hidden units, whereas others are binary, like whether to use Batch Normalization or not. Some hyperparameters are implicitly bounded in their effect, like the weight decay coefficient, which can only *reduce* capacity. Thus, if the model is underfitting, you can't get it to overfit by adding weight decay.\n",
    "\n",
    "As mentioned before, if you can tune only one hyperparameter, tune the learning rate. The effective capacity of the model is highest at the right learning rate, neither too high nor too low. We discuss the effect of the learning rate on training in more detail in one of our [earlier posts](https://medium.com/inveterate-learner/deep-learning-book-chapter-8-optimization-for-training-deep-models-part-i-20ae75984cb2#7da2), but to summarize: setting the learning rate too low slows training and might even cause the algorithm to get stuck in local minima; setting it too high might make training unstable due to wild oscillations.\n",
    "\n",
    "![lr values](images/lr_high_low.png)\n",
    "\n",
    "If the training error is high, the general approach is to add more layers or more hidden units to increase capacity. If the training error is low but the test error is high, you need to reduce the gap between the train and test errors without increasing the training error too much. Usually, a sufficiently large model that is well-regularized (e.g. using Dropout, Batch Normalization, weight decay, etc.) works best.\n",
    "\n",
    "The two broad approaches to the final goal of low generalization error are adding regularization to the model and increasing the dataset size. The table below shows how each hyperparameter affects capacity:\n",
    "\n",
    "![hyperparam cap](images/hyperparam_capacity.png)\n",
    "\n",
    "**ii) Automatic Hyperparameter Optimization Algorithms:** Hyperparameter tuning can itself be viewed as an optimization process, one that optimizes an objective such as the validation error, sometimes under constraints like training time, memory limits, etc. Thus, we can design *Hyperparameter Optimization (HO)* algorithms that wrap a learning algorithm and choose its hyperparameters. Unfortunately, these HO algorithms have their own set of hyperparameters, but those are generally easier to choose, as discussed next:\n",
    "\n",
    "*Grid Search*: For grid search, first pick a range of values you feel is suitable for each hyperparameter. Then, train the model for each possible combination of hyperparameter values. To simplify, if you have 2 hyperparameters and pick N values for each, you'll need to train the model for all $N^2$ combinations. You generally set the maximum and minimum of the range based on your understanding (and/or experience) and then choose the values in between, generally on a logarithmic scale. For e.g., possible values for the learning rate: {0.1, 0.01, 0.001}; for the number of hidden units: {50, 100, 200, 400}, etc.\n",
    "\n",
    "Also, grid search works best when performed repeatedly. E.g., if the range you set was {-1, 0, 1} and the best performing value was 1, you probably set the range wrong and should search again over a higher range like {1, 2, 3}. If the best performing value comes out as 0, you should do a more refined search over {-0.1, 0, 0.1}. \n",
    "The main problem with grid search is the computational cost: if there are m hyperparameters and each can take N values, the number of training and evaluation trials grows as O($N^m$).\n",
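    "\n",
    "Grid search itself is only a few lines of code. A toy sketch (the `validation_error` function here is a made-up stand-in for an actual training run):\n",
    "\n",
    "```python\n",
    "import itertools\n",
    "\n",
    "def validation_error(lr, n_hidden):\n",
    "    # stand-in for training a model and measuring validation error\n",
    "    return abs(lr - 0.01) + abs(n_hidden - 200) / 1000.0\n",
    "\n",
    "grid = {'lr': [0.1, 0.01, 0.001], 'n_hidden': [50, 100, 200, 400]}\n",
    "best = min(itertools.product(grid['lr'], grid['n_hidden']),\n",
    "           key=lambda cfg: validation_error(*cfg))\n",
    "print(best)  # (0.01, 200) on this toy error surface\n",
    "```\n",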
    "\n",
    "*Random Search*: A better and faster approach is random search. Here, you define a distribution over the values of each hyperparameter, e.g. Bernoulli for binary choices, multinoulli for discrete ones, and uniform on a log scale for, say, the learning rate:\n",
    "\n",
    "![random search](images/random_search_dist.png)\n",
    "\n",
    "Then, for each run, randomly sample the value of each hyperparameter based on its distribution. This can prove to be exponentially more efficient than grid search. The figure below explains this:\n",
    "\n",
    "![grid random search](images/grid_random_search.png)\n",
    "\n",
    "To make it clearer, the main reason random search reduces validation error faster than grid search is that it performs fewer wasted computations. Since grid search goes over all possible combinations, it evaluates cases where the value of only one hyperparameter changes while the rest stay the same. If that hyperparameter doesn't affect performance much, grid search has performed a wasted evaluation. In random search, however, for different values of one hyperparameter, the values of the rest will most likely also differ, so evaluations are rarely wasted.\n",
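    "\n",
    "A sketch of random search sampling (the distributions and ranges are our own choices, not prescribed by the book):\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "random.seed(0)\n",
    "\n",
    "def sample_config():\n",
    "    return {\n",
    "        'lr': 10 ** random.uniform(-4, -1),              # uniform on a log scale\n",
    "        'use_batchnorm': random.random() < 0.5,          # Bernoulli for binary\n",
    "        'n_hidden': random.choice([50, 100, 200, 400]),  # multinoulli for discrete\n",
    "    }\n",
    "\n",
    "configs = [sample_config() for _ in range(5)]\n",
    "for c in configs:\n",
    "    print(c)\n",
    "```\n",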
    "\n",
    "*Model-based Hyperparameter Optimization*: As mentioned briefly above, hyperparameter tuning can be viewed as an optimization process. In simplified settings, it might be possible to take the gradient of some differentiable error measure on the validation set with respect to the hyperparameters and simply use gradient descent. However, in most practical settings, this gradient is not available. To compensate, you can build a model of the validation error and perform optimization on that model. A common approach is to build a Bayesian regression model estimating the expected value of the validation error along with the uncertainty around this estimate. Bayesian Hyperparameter Optimization (BHO) is still nascent and not sufficiently reliable. One major drawback compared to random search is that BHO requires each experiment to run to completion before any information can be extracted from it, whereas in many cases it is clearly visible at the initial stages that a particular set of hyperparameters doesn't work."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Debugging Strategies\n",
    "\n",
    "- *Visualize the model in action*: This is one of the best ways to verify that training is proceeding correctly and to understand which areas might need improvement. Once training starts, visualize the output of your model after a few epochs. If you're working on a semantic segmentation problem, look at the segmentation output. If you're training a generative model of speech, listen to a few samples of speech it produces. Also, it's common to have bugs in the evaluation metric, which might need corner-case handling you haven't taken care of. Evaluation bugs are the hardest to catch, since they fool you into believing your model is performing well (or badly) when it isn't.\n",
    "\n",
    "![model output](images/model_output.png)\n",
    "\n",
    "- *Visualize the worst mistakes*: Going back to the semantic segmentation problem above, suppose we run the model on our test set. Based on the IoU scores, we can sort the samples to identify where our model performed the worst. Visualizing the examples where the model fails terribly is a great way to identify errors in data processing or annotation. If you infer that the problem lies with the annotation of the data, the best way to improve performance is to actually correct the annotations, even manually if required, as the payoff of having correct data is very high.\n",
    "\n",
    "![google mistake](images/google_mistake.jpg)\n",
    "\n",
    "Google misclassified a photo of humans as gorillas and came under scrutiny for this bias in its algorithms.\n",
    "\n",
    "- *Fit a tiny dataset*: Before training on your entire training set, always fit your model to a small subset of the dataset. Even very simple models can overfit a handful of examples. Taking the extreme case of a single example, it's very easy to fit it correctly by setting the weights to zero and the biases appropriately. From my practical experience too, if you're making a modification or trying something different, first make sure it can overfit a small enough dataset. If it can't, there's a high probability of a software bug in the training setup.\n",
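    "\n",
    "A sketch of this sanity check (our own toy setup): logistic regression on four separable points should reach 100% training accuracy quickly.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])\n",
    "y = np.array([1.0, 0.0, 1.0, 0.0])  # label equals the second feature\n",
    "\n",
    "w, b = np.zeros(2), 0.0\n",
    "for _ in range(2000):\n",
    "    p = 1.0 / (1.0 + np.exp(-(X.dot(w) + b)))  # sigmoid predictions\n",
    "    grad = p - y                               # cross-entropy gradient w.r.t. logits\n",
    "    w -= 0.5 * X.T.dot(grad) / len(y)\n",
    "    b -= 0.5 * grad.mean()\n",
    "\n",
    "acc = float(((p > 0.5) == (y == 1.0)).mean())\n",
    "print(acc)  # anything below 1.0 here suggests a bug in the training loop\n",
    "```\n",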
    "\n",
    "- *Monitor histograms of activations and gradients*: It can be useful to monitor the pre-activation values of hidden units when there is a problem in training. What to monitor depends on the activation function. For example, with ReLU (commonly used between layers), we can check how often a unit is off (which happens when the pre-activation value is < 0). For sigmoidal units, it is useful to check how often they stay in the saturated regions, i.e. either too positive or too negative. Also, gradients that grow or vanish too quickly can be a problem during training. The book advises that the magnitude of the parameter update should be approximately 1% of the magnitude of the parameter, neither too high (50%) nor too low (0.001%). Thus, comparing the two magnitudes is a good approach for debugging too. \n",
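    "\n",
    "A sketch of that comparison (our own, with a fabricated gradient):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def update_ratio(param, grad, lr):\n",
    "    # |lr * grad| / |param|: roughly 1% is the rule of thumb cited above\n",
    "    return lr * np.linalg.norm(grad) / np.linalg.norm(param)\n",
    "\n",
    "np.random.seed(0)\n",
    "w = np.random.randn(100)\n",
    "g = 0.02 * w  # fabricated gradient, 2% of the parameter magnitude\n",
    "print(update_ratio(w, g, lr=0.5))  # 0.5 * 0.02 = 0.01, i.e. a 1% update\n",
    "```\n",
    "\n",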
    "Finally, it can be shown (covered in later chapters) that some optimization algorithms provide certain guarantees, like the objective function not increasing after each epoch or all the gradients being zero at convergence, and we can check that these guarantees are met."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
