{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Saddle point"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What is a saddle point?\n",
    "\n",
    "A saddle point is a point where all the partial derivatives are zero, yet it is neither a minimum nor a maximum of the loss function: the loss curves upward along some directions and downward along others. When the parameters are very close to a saddle point, the gradient gets extremely close to zero, which can slow training down. We call this phenomenon _\"getting stuck at a saddle point\"_, though visually speaking, it should be called _sitting on the saddle_."
   ]
  },
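  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal numerical sketch, using the textbook example `f(x, y) = x**2 - y**2` (the function name and the step size `h` are illustrative choices): at the origin both partial derivatives vanish, yet the point is neither a minimum nor a maximum."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# f(x, y) = x**2 - y**2 has a saddle point at the origin.\n",
    "def f(x, y):\n",
    "    return x**2 - y**2\n",
    "\n",
    "h = 1e-6  # step size for central-difference estimates\n",
    "df_dx = (f(h, 0) - f(-h, 0)) / (2 * h)\n",
    "df_dy = (f(0, h) - f(0, -h)) / (2 * h)\n",
    "print(df_dx, df_dy)          # both partial derivatives are zero\n",
    "print(f(0.1, 0), f(0, 0.1))  # but f rises along x and falls along y"
   ]
  },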
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## When do saddle points appear?\n",
    "\n",
    "Saddle points appear at stationary points, where the derivative is zero along every dimension. A stationary point can be a local minimum or a local maximum (the latter is less likely to trap training, as gradients point away from maxima). However, it can also be a saddle point: a minimum along some dimensions and a maximum along others."
   ]
  },
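  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of telling these cases apart by the sign of the curvature along each axis (the diagonal of the Hessian; off-diagonal terms are ignored here for simplicity). The helper names `second_derivative` and `classify` are illustrative, not a standard API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def second_derivative(f, x, y, axis, h=1e-4):\n",
    "    # Central-difference estimate of the second derivative along one axis.\n",
    "    if axis == 0:\n",
    "        return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2\n",
    "    return (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2\n",
    "\n",
    "def classify(f, x, y):\n",
    "    # Assumes (x, y) is already a stationary point (zero gradient).\n",
    "    curv = [second_derivative(f, x, y, axis) for axis in (0, 1)]\n",
    "    if all(c > 0 for c in curv):\n",
    "        return 'local minimum'\n",
    "    if all(c < 0 for c in curv):\n",
    "        return 'local maximum'\n",
    "    return 'saddle point'\n",
    "\n",
    "print(classify(lambda x, y: x**2 + y**2, 0, 0))  # curves up everywhere\n",
    "print(classify(lambda x, y: x**2 - y**2, 0, 0))  # up along x, down along y"
   ]
  },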
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Why not worry about saddle points?\n",
    "\n",
    "Saddle points exist, no doubt. However, getting truly stuck at one is very unlikely, especially for very large nets. To have a chance of being stuck, we have to cross our fingers and hope that all (not just some) parameters sit at a maximum along their dimension. Sounds unlikely, right? Even if some parameters sit at a maximum, it usually does not matter when 99.999% of the parameters are at a minimum along their dimensions: gradient descent still has plenty of downhill directions to escape through. With larger nets, it is even less likely that we encounter saddle points that actually affect our training. So don't fear!"
   ]
  }
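  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A rough back-of-the-envelope sketch of that intuition: if the curvature along each of `n` parameter directions were an independent coin flip between \"curves up\" and \"curves down\" (a simplifying toy model, not a theorem), a stationary point would curve the same way in every direction with probability `0.5**n`, which is vanishingly small for large nets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Under the coin-flip toy model, the probability that a stationary point\n",
    "# curves upward along all n directions (i.e. is a local minimum) is 0.5**n.\n",
    "for n in (2, 10, 100):\n",
    "    print(n, 0.5**n)"
   ]
  }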
 ],
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": 3
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
