<!DOCTYPE html>
<html lang="en">
<!-- Produced from a LaTeX source file.  Note that the production is done -->
<!-- by a very rough-and-ready (and buggy) script, so the HTML and other  -->
<!-- code is quite ugly!  Later versions should be better.                -->

<head>
  <meta charset="utf-8">
  <meta name="citation_title" content="Neural Networks and Deep Learning">
  <meta name="citation_author" content="Nielsen, Michael A.">
  <meta name="citation_publication_date" content="2015">
  <meta name="citation_fulltext_html_url" content="http://neuralnetworksanddeeplearning.com">
  <meta name="citation_publisher" content="Determination Press">
  <meta name="citation_fulltext_world_readable" content="">
  <link rel="icon" href="nnadl_favicon.ICO" />
  <title>Neural networks and deep learning</title>
  <script src="assets/jquery.min.js"></script>
  <script type="text/x-mathjax-config">
      MathJax.Hub.Config({
        tex2jax: {inlineMath: [['$','$']]},
        "HTML-CSS": 
          {scale: 92},
        TeX: { equationNumbers: { autoNumber: "AMS" }}});
    </script>
  <script type="text/javascript"
    src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>


  <link href="assets/style.css" rel="stylesheet">
  <link href="assets/pygments.css" rel="stylesheet">
  <link rel="stylesheet" href="https://code.jquery.com/ui/1.11.2/themes/smoothness/jquery-ui.css">

  <style>
    /* Adapted from */
    /* https://groups.google.com/d/msg/mathjax-users/jqQxrmeG48o/oAaivLgLN90J, */
    /* by David Cervone */

    @font-face {
      font-family: 'MJX_Math';
      src: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot');
      /* IE9 Compat Modes */
      src: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot?iefix') format('eot'),
        url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'),
        url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype'),
        url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/svg/MathJax_Math-Italic.svg#MathJax_Math-Italic') format('svg');
    }

    @font-face {
      font-family: 'MJX_Main';
      src: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot');
      /* IE9 Compat Modes */
      src: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot?iefix') format('eot'),
        url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'),
        url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype'),
        url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/fonts/HTML-CSS/TeX/svg/MathJax_Main-Regular.svg#MathJax_Main-Regular') format('svg');
    }
  </style>

</head>

<body>
  <div class="nonumber_header">
    <h2>What this book is about</h2>
  </div>
  <div class="section">
    <div id="toc">
      <p class="toc_title"><a href="index.html">Neural Networks and Deep Learning</a></p>
      <p class="toc_not_mainchapter"><a href="about.html">What this book is about</a></p>
      <p class="toc_not_mainchapter"><a href="exercises_and_problems.html">On the exercises and problems</a></p>
      <p class='toc_mainchapter'><a id="toc_using_neural_nets_to_recognize_handwritten_digits_reveal" class="toc_reveal"
          onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img
            id="toc_img_using_neural_nets_to_recognize_handwritten_digits" src="images/arrow.png" width="15px"></a><a
          href="chap1.html">Using neural nets to recognize handwritten digits</a>
      <div id="toc_using_neural_nets_to_recognize_handwritten_digits" style="display: none;">
        <p class="toc_section">
        <ul><a href="chap1.html#perceptrons">
            <li>Perceptrons</li>
          </a><a href="chap1.html#sigmoid_neurons">
            <li>Sigmoid neurons</li>
          </a><a href="chap1.html#the_architecture_of_neural_networks">
            <li>The architecture of neural networks</li>
          </a><a href="chap1.html#a_simple_network_to_classify_handwritten_digits">
            <li>A simple network to classify handwritten digits</li>
          </a><a href="chap1.html#learning_with_gradient_descent">
            <li>Learning with gradient descent</li>
          </a><a href="chap1.html#implementing_our_network_to_classify_digits">
            <li>Implementing our network to classify digits</li>
          </a><a href="chap1.html#toward_deep_learning">
            <li>Toward deep learning</li>
          </a></ul>
        </p>
      </div>
      <script>
        $('#toc_using_neural_nets_to_recognize_handwritten_digits_reveal').click(function () {
          var src = $('#toc_img_using_neural_nets_to_recognize_handwritten_digits').attr('src');
          if (src == 'images/arrow.png') {
            $("#toc_img_using_neural_nets_to_recognize_handwritten_digits").attr('src', 'images/arrow_down.png');
          } else {
            $("#toc_img_using_neural_nets_to_recognize_handwritten_digits").attr('src', 'images/arrow.png');
          };
          $('#toc_using_neural_nets_to_recognize_handwritten_digits').toggle('fast', function () { });
        });</script>
      <p class='toc_mainchapter'><a id="toc_how_the_backpropagation_algorithm_works_reveal" class="toc_reveal"
          onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img
            id="toc_img_how_the_backpropagation_algorithm_works" src="images/arrow.png" width="15px"></a><a
          href="chap2.html">How the backpropagation algorithm works</a>
      <div id="toc_how_the_backpropagation_algorithm_works" style="display: none;">
        <p class="toc_section">
        <ul><a href="chap2.html#warm_up_a_fast_matrix-based_approach_to_computing_the_output_from_a_neural_network">
            <li>Warm up: a fast matrix-based approach to computing the output
              from a neural network</li>
          </a><a href="chap2.html#the_two_assumptions_we_need_about_the_cost_function">
            <li>The two assumptions we need about the cost function</li>
          </a><a href="chap2.html#the_hadamard_product_$s_\odot_t$">
            <li>The Hadamard product, $s \odot t$</li>
          </a><a href="chap2.html#the_four_fundamental_equations_behind_backpropagation">
            <li>The four fundamental equations behind backpropagation</li>
          </a><a href="chap2.html#proof_of_the_four_fundamental_equations_(optional)">
            <li>Proof of the four fundamental equations (optional)</li>
          </a><a href="chap2.html#the_backpropagation_algorithm">
            <li>The backpropagation algorithm</li>
          </a><a href="chap2.html#the_code_for_backpropagation">
            <li>The code for backpropagation</li>
          </a><a href="chap2.html#in_what_sense_is_backpropagation_a_fast_algorithm">
            <li>In what sense is backpropagation a fast algorithm?</li>
          </a><a href="chap2.html#backpropagation_the_big_picture">
            <li>Backpropagation: the big picture</li>
          </a></ul>
        </p>
      </div>
      <script>
        $('#toc_how_the_backpropagation_algorithm_works_reveal').click(function () {
          var src = $('#toc_img_how_the_backpropagation_algorithm_works').attr('src');
          if (src == 'images/arrow.png') {
            $("#toc_img_how_the_backpropagation_algorithm_works").attr('src', 'images/arrow_down.png');
          } else {
            $("#toc_img_how_the_backpropagation_algorithm_works").attr('src', 'images/arrow.png');
          };
          $('#toc_how_the_backpropagation_algorithm_works').toggle('fast', function () { });
        });</script>
      <p class='toc_mainchapter'><a id="toc_improving_the_way_neural_networks_learn_reveal" class="toc_reveal"
          onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img
            id="toc_img_improving_the_way_neural_networks_learn" src="images/arrow.png" width="15px"></a><a
          href="chap3.html">Improving the way neural networks learn</a>
      <div id="toc_improving_the_way_neural_networks_learn" style="display: none;">
        <p class="toc_section">
        <ul><a href="chap3.html#the_cross-entropy_cost_function">
            <li>The cross-entropy cost function</li>
          </a><a href="chap3.html#overfitting_and_regularization">
            <li>Overfitting and regularization</li>
          </a><a href="chap3.html#weight_initialization">
            <li>Weight initialization</li>
          </a><a href="chap3.html#handwriting_recognition_revisited_the_code">
            <li>Handwriting recognition revisited: the code</li>
          </a><a href="chap3.html#how_to_choose_a_neural_network's_hyper-parameters">
            <li>How to choose a neural network's hyper-parameters?</li>
          </a><a href="chap3.html#other_techniques">
            <li>Other techniques</li>
          </a></ul>
        </p>
      </div>
      <script>
        $('#toc_improving_the_way_neural_networks_learn_reveal').click(function () {
          var src = $('#toc_img_improving_the_way_neural_networks_learn').attr('src');
          if (src == 'images/arrow.png') {
            $("#toc_img_improving_the_way_neural_networks_learn").attr('src', 'images/arrow_down.png');
          } else {
            $("#toc_img_improving_the_way_neural_networks_learn").attr('src', 'images/arrow.png');
          };
          $('#toc_improving_the_way_neural_networks_learn').toggle('fast', function () { });
        });</script>
      <p class='toc_mainchapter'><a id="toc_a_visual_proof_that_neural_nets_can_compute_any_function_reveal"
          class="toc_reveal" onMouseOver="this.style.borderBottom='1px solid #2A6EA6';"
          onMouseOut="this.style.borderBottom='0px';"><img
            id="toc_img_a_visual_proof_that_neural_nets_can_compute_any_function" src="images/arrow.png"
            width="15px"></a><a href="chap4.html">A visual proof that neural nets can compute any function</a>
      <div id="toc_a_visual_proof_that_neural_nets_can_compute_any_function" style="display: none;">
        <p class="toc_section">
        <ul><a href="chap4.html#two_caveats">
            <li>Two caveats</li>
          </a><a href="chap4.html#universality_with_one_input_and_one_output">
            <li>Universality with one input and one output</li>
          </a><a href="chap4.html#many_input_variables">
            <li>Many input variables</li>
          </a><a href="chap4.html#extension_beyond_sigmoid_neurons">
            <li>Extension beyond sigmoid neurons</li>
          </a><a href="chap4.html#fixing_up_the_step_functions">
            <li>Fixing up the step functions</li>
          </a><a href="chap4.html#conclusion">
            <li>Conclusion</li>
          </a></ul>
        </p>
      </div>
      <script>
        $('#toc_a_visual_proof_that_neural_nets_can_compute_any_function_reveal').click(function () {
          var src = $('#toc_img_a_visual_proof_that_neural_nets_can_compute_any_function').attr('src');
          if (src == 'images/arrow.png') {
            $("#toc_img_a_visual_proof_that_neural_nets_can_compute_any_function").attr('src', 'images/arrow_down.png');
          } else {
            $("#toc_img_a_visual_proof_that_neural_nets_can_compute_any_function").attr('src', 'images/arrow.png');
          };
          $('#toc_a_visual_proof_that_neural_nets_can_compute_any_function').toggle('fast', function () { });
        });</script>
      <p class='toc_mainchapter'><a id="toc_why_are_deep_neural_networks_hard_to_train_reveal" class="toc_reveal"
          onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img
            id="toc_img_why_are_deep_neural_networks_hard_to_train" src="images/arrow.png" width="15px"></a><a
          href="chap5.html">Why are deep neural networks hard to train?</a>
      <div id="toc_why_are_deep_neural_networks_hard_to_train" style="display: none;">
        <p class="toc_section">
        <ul><a href="chap5.html#the_vanishing_gradient_problem">
            <li>The vanishing gradient problem</li>
          </a><a href="chap5.html#what's_causing_the_vanishing_gradient_problem_unstable_gradients_in_deep_neural_nets">
            <li>What's causing the vanishing gradient problem? Unstable gradients in deep neural nets</li>
          </a><a href="chap5.html#unstable_gradients_in_more_complex_networks">
            <li>Unstable gradients in more complex networks</li>
          </a><a href="chap5.html#other_obstacles_to_deep_learning">
            <li>Other obstacles to deep learning</li>
          </a></ul>
        </p>
      </div>
      <script>
        $('#toc_why_are_deep_neural_networks_hard_to_train_reveal').click(function () {
          var src = $('#toc_img_why_are_deep_neural_networks_hard_to_train').attr('src');
          if (src == 'images/arrow.png') {
            $("#toc_img_why_are_deep_neural_networks_hard_to_train").attr('src', 'images/arrow_down.png');
          } else {
            $("#toc_img_why_are_deep_neural_networks_hard_to_train").attr('src', 'images/arrow.png');
          };
          $('#toc_why_are_deep_neural_networks_hard_to_train').toggle('fast', function () { });
        });</script>
      <p class='toc_mainchapter'><a id="toc_deep_learning_reveal" class="toc_reveal"
          onMouseOver="this.style.borderBottom='1px solid #2A6EA6';" onMouseOut="this.style.borderBottom='0px';"><img
            id="toc_img_deep_learning" src="images/arrow.png" width="15px"></a><a href="chap6.html">Deep learning</a>
      <div id="toc_deep_learning" style="display: none;">
        <p class="toc_section">
        <ul><a href="chap6.html#introducing_convolutional_networks">
            <li>Introducing convolutional networks</li>
          </a><a href="chap6.html#convolutional_neural_networks_in_practice">
            <li>Convolutional neural networks in practice</li>
          </a><a href="chap6.html#the_code_for_our_convolutional_networks">
            <li>The code for our convolutional networks</li>
          </a><a href="chap6.html#recent_progress_in_image_recognition">
            <li>Recent progress in image recognition</li>
          </a><a href="chap6.html#other_approaches_to_deep_neural_nets">
            <li>Other approaches to deep neural nets</li>
          </a><a href="chap6.html#on_the_future_of_neural_networks">
            <li>On the future of neural networks</li>
          </a></ul>
        </p>
      </div>
      <script>
        $('#toc_deep_learning_reveal').click(function () {
          var src = $('#toc_img_deep_learning').attr('src');
          if (src == 'images/arrow.png') {
            $("#toc_img_deep_learning").attr('src', 'images/arrow_down.png');
          } else {
            $("#toc_img_deep_learning").attr('src', 'images/arrow.png');
          };
          $('#toc_deep_learning').toggle('fast', function () { });
        });</script>
      <p class="toc_not_mainchapter"><a href="sai.html">Appendix: Is there a <em>simple</em> algorithm for
          intelligence?</a></p>
      <p class="toc_not_mainchapter"><a href="acknowledgements.html">Acknowledgements</a></p>
      <p class="toc_not_mainchapter"><a href="faq.html">Frequently Asked Questions</a></p>
      <hr>
      <span class="sidebar_title">Resources</span>

      <p class="sidebar"><a href="https://twitter.com/michael_nielsen">Michael Nielsen on Twitter</a></p>

      <p class="sidebar"><a href="faq.html">Book FAQ</a></p>

      <p class="sidebar">
        <a href="https://github.com/mnielsen/neural-networks-and-deep-learning">Code repository</a>
      </p>

      <p class="sidebar">
        <a href="http://eepurl.com/0Xxjb">Michael Nielsen's project announcement mailing list</a>
      </p>

      <p class="sidebar"> <a href="http://www.deeplearningbook.org/">Deep Learning</a>, book by Ian
        Goodfellow, Yoshua Bengio, and Aaron Courville</p>

      <p class="sidebar"><a href="http://cognitivemedium.com">cognitivemedium.com</a></p>

      <hr>
      <a href="http://michaelnielsen.org"><img src="assets/Michael_Nielsen_Web_Small.jpg" width="160px"
          style="border-style: none;" /></a>

      <p class="sidebar">
        By <a href="http://michaelnielsen.org">Michael Nielsen</a> / Dec 2019
      </p>
    </div>
    <p>Neural networks are one of the most beautiful programming paradigms
      ever invented. In the conventional approach to programming, we tell
      the computer what to do, breaking big problems up into many small,
      precisely defined tasks that the computer can easily perform. By
      contrast, in a neural network we don't tell the computer how to solve
      our problem. Instead, it learns from observational data, figuring out
      its own solution to the problem at hand.</p>
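    <p>To make this contrast concrete, here is a toy sketch (not code from the book): a hand-coded rule
      for the logical OR function alongside a perceptron-style rule that learns the same behaviour from
      examples. The function names and the 0.1 learning rate are illustrative choices only.</p>

```python
# Conventional programming: we tell the computer the answer explicitly.
def handcoded_or(x1, x2):
    return 1 if (x1 == 1 or x2 == 1) else 0

# Learning from data: start with arbitrary weights, then adjust them
# whenever the current rule gets a training example wrong.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
for _ in range(10):  # a few passes over the training data
    for (x1, x2), target in data:
        out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
        w1 += 0.1 * (target - out) * x1   # perceptron update rule
        w2 += 0.1 * (target - out) * x2
        b += 0.1 * (target - out)

def learned_or(x1, x2):
    # The weights, not the programmer, now encode the solution.
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

    <p>After a few passes the learned weights reproduce OR on all four inputs, even though the rule was
      never written down explicitly.</p>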
    <p>Automatically learning from data sounds promising. However, until
      2006 we didn't know how to train neural networks to surpass more
      traditional approaches, except for a few specialized problems. What
      changed in 2006 was the discovery of techniques for learning in
      so-called deep neural networks. These techniques are now known as
      deep learning. They've been developed further, and today deep neural
      networks and deep learning achieve outstanding performance on many
      important problems in computer vision, speech recognition, and natural
      language processing. They're being deployed on a large scale by
      companies such as Google, Microsoft, and Facebook.</p>
    <p>The purpose of this book is to help you master the core concepts of
      neural networks, including modern techniques for deep learning. After
      working through the book you will have written code that uses neural
      networks and deep learning to solve complex pattern recognition
      problems. And you will have a foundation to use neural networks and
      deep learning to attack problems of your own devising.</p>
    <h3><a name="a_principle-oriented_approach"></a><a href="#a_principle-oriented_approach">A principle-oriented
        approach</a></h3>
    <p>One conviction underlying the book is that it's better to obtain a
      solid understanding of the core principles of neural networks and deep
      learning, rather than a hazy understanding of a long laundry list of
      ideas. If you've understood the core ideas well, you can rapidly
      understand other new material. In programming language terms, think
      of it as mastering the core syntax, libraries, and data structures of a
      new language. You may still only "know" a tiny fraction of the
      total language - many languages have enormous standard libraries -
      but new libraries and data structures can be understood quickly and
      easily.</p>
    <p>This means the book is emphatically not a tutorial in how to use some
      particular neural network library. If you mostly want to learn your
      way around a library, don't read this book! Find the library you wish
      to learn, and work through the tutorials and documentation. But be
      warned. While this has an immediate problem-solving payoff, if you
      want to understand what's really going on in neural networks, if you
      want insights that will still be relevant years from now, then it's
      not enough just to learn some hot library. You need to understand the
      durable, lasting insights underlying how neural networks work.
      Technologies come and technologies go, but insight is forever.</p>
    <h3><a name="a_hands-on_approach"></a><a href="#a_hands-on_approach">A hands-on approach</a></h3>
    <p>We'll learn the core principles behind neural networks and deep
      learning by attacking a concrete problem: the problem of teaching a
      computer to recognize handwritten digits. This problem is extremely
      difficult to solve using the conventional approach to programming.
      And yet, as we'll see, it can be solved pretty well using a simple
      neural network, with just a few tens of lines of code, and no special
      libraries. What's more, we'll improve the program through many
      iterations, gradually incorporating more and more of the core ideas
      about neural networks and deep learning.</p>
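    <p>To give a flavour of how little machinery is involved, here is a minimal sketch of the basic
      feedforward computation (this is not the book's actual code, which is developed in Chapter 1):
      each layer multiplies by its weights, adds its biases, and squashes the result with the sigmoid
      function. The 4-3-2 layer sizes are a stand-in for the 784-30-10 network used for digits.</p>

```python
import math
import random

def sigmoid(z):
    # Squashes any real number into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(weights, biases, x):
    # Propagate the input through each sigmoid layer in turn.
    a = x
    for W, b in zip(weights, biases):
        a = [sigmoid(sum(w * ai for w, ai in zip(row, a)) + bj)
             for row, bj in zip(W, b)]
    return a

# A toy 4-3-2 network with random weights, standing in for the
# 784-30-10 network the book trains on handwritten digits.
random.seed(0)
sizes = [4, 3, 2]
weights = [[[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [[random.gauss(0, 1) for _ in range(n_out)] for n_out in sizes[1:]]

output = feedforward(weights, biases, [0.5, 0.1, 0.9, 0.3])
```

    <p>An untrained network like this produces essentially arbitrary outputs; adjusting the weights and
      biases so the outputs become useful, via gradient descent, is what the book develops.</p>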
    <p>This hands-on approach means that you'll need some programming
      experience to read the book. But you don't need to be a professional
      programmer. I've written the code in Python (version 2.7), which,
      even if you don't program in Python, should be easy to understand with
      just a little effort. Through the course of the book we will develop
      a little neural network library, which you can use to experiment and
      to build understanding. All the code is available for download
      <a href="https://github.com/mnielsen/neural-networks-and-deep-learning">here</a>.
      Once you've finished the book, or as you read it, you can easily pick
      up one of the more feature-complete neural network libraries intended
      for use in production.
    </p>
    <p>On a related note, the mathematical requirements to read the book are
      modest. There is some mathematics in most chapters, but it's usually
      just elementary algebra and plots of functions, which I expect most
      readers will be okay with. I occasionally use more advanced
      mathematics, but have structured the material so you can follow even
      if some mathematical details elude you. The one chapter which uses
      heavier mathematics extensively is <a href="chap2.html">Chapter 2</a>, which
      requires a little multivariable calculus and linear algebra. If those
      aren't familiar, I begin <a href="chap2.html">Chapter 2</a> with a
      discussion of how to navigate the mathematics. If you're finding it
      really heavy going, you can simply skip to the
      <a href="chap2.html#the_backpropagation_algorithm">summary</a> of the
      chapter's main results. In any case, there's no need to worry about
      this at the outset.
    </p>
    <p>It's rare for a book to aim to be both principle-oriented and
      hands-on. But I believe you'll learn best if we build out the
      fundamental ideas of neural networks. We'll develop living code, not
      just abstract theory, code which you can explore and extend. This way
      you'll understand the fundamentals, both in theory and practice, and
      be well set to add further to your knowledge.</p>
  </div>
  <div class="footer"> <span class="left_footer"> In academic work,
      please cite this book as: Michael A. Nielsen, "Neural Networks and
      Deep Learning", Determination Press, 2015

      <br />
      <br />

      This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/3.0/deed.en_GB"
        style="color: #eee;">Creative Commons Attribution-NonCommercial 3.0
        Unported License</a>. This means you're free to copy, share, and
      build on this book, but not to sell it. If you're interested in
      commercial use, please <a href="mailto:mn@michaelnielsen.org">contact me</a>.
    </span>
    <span class="right_footer">
      Last update: Thu Dec 26 15:26:33 2019
      <br />
      <br />
      <br />
      <a rel="license" href="http://creativecommons.org/licenses/by-nc/3.0/deed.en_GB"><img
          alt="Creative Commons Licence" style="border-width:0"
          src="http://i.creativecommons.org/l/by-nc/3.0/88x31.png" /></a>
    </span>

</body>

</html>