---
layout: post
title: Links on deep learning
date: '2015-05-04T14:26:00.001-07:00'
author: Alex
tags:
- Machine Learning
- Deep Learning
- Graphical Models
modified_time: '2015-05-15T08:00:18.032-07:00'
blogger_id: tag:blogger.com,1999:blog-307916792578626510.post-8150725402340336481
blogger_orig_url: http://brilliantlywrong.blogspot.com/2015/05/links-on-deep-learning.html
---

<p>I didn't know where else to put these, so I'm posting them to the blog for memory's sake. </p>
<ul>
    <li><a href='https://charlesmartin14.wordpress.com/2015/03/25/why-does-deep-learning-work/'>https://charlesmartin14.wordpress.com/2015/03/25/why-does-deep-learning-work/</a>
        <br/>There is much fuss nowadays about why deep learning works at all (there is no deep theory behind it today),
        and I love reading these hypothetical explanations (though I'm absolutely sure all of them are wrong: a good
        explanation of success should give you new ideas about what will work next). <br> <br>In this couple of
        articles the author argues that the action of an RBM can be derived as the action of the renormalization group. BTW,
        this is not the first physical analogy in neural networks. Apart from RBMs, which use a Gibbs-like distribution, there
        were explanations of Hopfield neural networks via spin glasses and a derivation of the update rule from mean-field
        theory. <br/><br/></li>
    <li><a href='http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/'>http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/</a>
        <br/>This impressive post was written a year ago and shows fresh ideas about deep representations of
        objects. In particular, I was surprised by how a shared representation space for different kinds of objects can help with
        translation. <br/><br/></li>
    <li><a href='http://ai.stanford.edu/~ang/papers/icml09-ConvolutionalDeepBeliefNetworks.pdf'>http://ai.stanford.edu/~ang/papers/icml09-ConvolutionalDeepBeliefNetworks.pdf</a><br/>
        Finally, a link to the paper where convolutional RBMs were introduced. Using a softmax to couple the detection layer with
        its pooling layer (probabilistic max-pooling) is a good idea.
    </li>
</ul> <p>PS. Found a reading list recommended for new LISA lab students. <a
        href='http://www.datakit.cn/blog/2014/09/22/Reading_lists_for_new_LISA_students.html'>http://www.datakit.cn/blog/2014/09/22/Reading_lists_for_new_LISA_students.html </a>
</p>
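<p>To make the softmax pooling idea from the convolutional RBM paper concrete, here is a minimal NumPy sketch of probabilistic max-pooling as I understand it: the detection units in each pooling region compete through a softmax that also includes an "all off" state, so at most one unit per region tends to fire, and the pooling unit is on exactly when some detection unit is. The function name and block size are my own; this is an illustration, not the authors' code.</p>

```python
import numpy as np

def prob_max_pool(activations, block=2):
    """Probabilistic max-pooling over non-overlapping block x block regions.

    Each region's detection units share a softmax with an extra 'off' state
    (energy 0), so their on-probabilities plus the off-probability sum to 1.
    Returns (p_detect, p_pool): per-unit on-probabilities (same shape as the
    input) and per-region pooling-unit on-probabilities.
    """
    H, W = activations.shape
    assert H % block == 0 and W % block == 0
    # Gather each pooling region's activations into the last axis:
    # (H, W) -> (H//block, W//block, block*block)
    a = activations.reshape(H // block, block, W // block, block)
    a = a.transpose(0, 2, 1, 3).reshape(H // block, W // block, block * block)
    # Shift by the max over {activations, 0} for numerical stability;
    # the 'off' state contributes exp(0 - m).
    m = np.maximum(a.max(axis=-1, keepdims=True), 0.0)
    e = np.exp(a - m)
    off = np.exp(-m)
    denom = off + e.sum(axis=-1, keepdims=True)
    p_detect = e / denom                      # P(detection unit i is on)
    p_pool = 1.0 - (off / denom)[..., 0]      # P(pooling unit is on)
    # Scatter detection probabilities back to the input layout.
    p_detect = p_detect.reshape(H // block, W // block, block, block)
    p_detect = p_detect.transpose(0, 2, 1, 3).reshape(H, W)
    return p_detect, p_pool
```

<p>With all activations equal to zero, each 2x2 region has five equally likely states (four units plus "off"), so every detection unit gets probability 0.2 and the pooling unit 0.8, which is an easy sanity check.</p>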