
<!DOCTYPE html>
<html lang="" class="loading">
<head><meta name="generator" content="Hexo 3.8.0">
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <meta name="viewport" content="width=device-width, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">
    <title>LOST-CD - zw</title>

    <meta name="apple-mobile-web-app-capable" content="yes">
    <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
    <meta name="google" content="notranslate">
    <meta name="keywords" content="zwCDcc,"> 
    <meta name="description" content="asol,"> 
    <meta name="author" content="zwcdcc"> 
    <link rel="alternative" href="atom.xml" title="LOST-CD" type="application/atom+xml"> 
    <link rel="icon" href="/img/favicon.png"> 
    <link rel="stylesheet" href="//cdn.jsdelivr.net/npm/gitalk@1/dist/gitalk.css">
    <link rel="stylesheet" href="/css/diaspora.css">
</head>
<body class="loading">
    <div id="loader"></div>
    <div id="single">
    <div id="top" style="display: block;">
    <div class="bar" style="width: 0;"></div>
    <a class="icon-home image-icon" href="javascript:;"></a>
    <div title="播放/暂停" class="icon-play"></div>
    <h3 class="subtitle">Machine Learning</h3>
    <div class="social">
        <!--<div class="like-icon">-->
            <!--<a href="javascript:;" class="likeThis active"><span class="icon-like"></span><span class="count">76</span></a>-->
        <!--</div>-->
        <div>
            <div class="share">
                <a title="获取二维码" class="icon-scan" href="javascript:;"></a>
            </div>
            <div id="qr"></div>
        </div>
    </div>
    <div class="scrollbar"></div>
</div>
    <div class="section">
        <div class="article">
    <div class="main">
        <h1 class="title">Machine Learning</h1>
        <div class="stuff">
            <span>十一月 18, 2018</span>
            
  <ul class="post-tags-list"><li class="post-tags-list-item"><a class="post-tags-list-link" href="/tags/ML/">ML</a></li></ul>


        </div>
        <div class="content markdown">
            <h3 id="Catalog"><a href="#Catalog" class="headerlink" title="Catalog"></a>Catalog</h3><ol>
<li><a href="#What-is-Machine-Learning">What is Machine Learning?</a>  </li>
<li><a href="#Model-Representation">Model Representation</a>  </li>
<li><a href="#Supervised-Learning">Supervised Learning</a>  <ol>
<li><a href="#Linear-Regression">Linear Regression</a>  </li>
<li><a href="#Logistic-Regression">Logistic Regression</a>  </li>
<li><a href="#Neural-Networks">Neural Networks</a>  </li>
<li><a href="#Support-Vector-Machines">Support Vector Machines</a>  </li>
</ol>
</li>
<li><a href="#Unsupervised-Learning">Unsupervised Learning</a>  <ol>
<li><a href="#Clustering">Clustering</a>  </li>
<li><a href="#Dimensionality-Reduction">Dimensionality Reduction</a>  </li>
<li><a href="#Anomaly-Detection">Anomaly Detection</a>  </li>
<li><a href="#Recommender-System">Recommender System</a>  </li>
</ol>
</li>
<li><a href="#The-Operation-Of-Optimize">Optimization Techniques</a>  <ol>
<li><a href="#Feature-Scaling">Feature Scaling</a>  </li>
<li><a href="#Normal-Equation">Normal Equation</a>  </li>
<li><a href="#Advanced-Optimization">Advanced Optimization</a>  </li>
<li><a href="#Regularization">Regularization</a>  </li>
</ol>
</li>
<li><a href="#Advice">Advice</a>  </li>
</ol>
<h3 id="What-is-Machine-Learning"><a href="#What-is-Machine-Learning" class="headerlink" title="What is Machine Learning?"></a>What is Machine Learning?</h3><p><a href="#Catalog">Catalog</a>  </p>
<blockquote>
<p>DEFINITION: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”  </p>
<blockquote>
<p>Example: playing checkers.<br>E = the experience of playing many games of checkers.<br>T = the task of playing checkers.<br>P = the probability that the program will win the next game.  </p>
</blockquote>
<p>In general, any machine learning problem can be assigned to one of two broad classifications:<br>Supervised learning and Unsupervised learning.<br>Supervised learning:</p>
<blockquote>
<p>Regression: Given a picture of a person, we have to predict their age on the basis of the given picture.<br>Classification: Given a patient with a tumor, we have to predict whether the tumor is malignant or benign.  </p>
</blockquote>
<p>Unsupervised learning:  </p>
<blockquote>
<p>Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.<br>Non-clustering: The “Cocktail Party Algorithm”, allows you to find structure in a chaotic environment. (i.e. identifying individual voices and music from a mesh of sounds at a cocktail party).  </p>
</blockquote>
</blockquote>
<h3 id="Model-Representation"><a href="#Model-Representation" class="headerlink" title="Model Representation"></a>Model Representation</h3><p><a href="#Catalog">Catalog</a>  </p>
<blockquote>
<p><img src="/img/ml/module.png" alt="/img/ml/module.png">  </p>
</blockquote>
<h3 id="Supervised-Learning"><a href="#Supervised-Learning" class="headerlink" title="Supervised Learning"></a>Supervised Learning</h3><p><a href="#Catalog">Catalog</a>  </p>
<h4 id="Linear-Regression"><a href="#Linear-Regression" class="headerlink" title="Linear Regression"></a>Linear Regression</h4><blockquote>
<p><em>Cost Function</em>  </p>
<blockquote>
<p><img src="/img/ml/hthetaL.png" alt="/img/ml/hthetaL.png"><br><img src="/img/ml/costf.png" alt="/img/ml/costf.png">  </p>
</blockquote>
<p><em>Gradient Descent</em>   </p>
<blockquote>
<p><img src="/img/ml/singlegrid.png" alt="/img/ml/singlegrid.png"><br><img src="/img/ml/mulgrid.png" alt="/img/ml/mulgrid.png">  </p>
<ul>
<li>For sufficiently small α, J(θ) should decrease on every iteration.  </li>
<li>But if α is too small, gradient descent can be slow to converge.  </li>
</ul>
</blockquote>
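<p>As a quick sketch, the batch gradient descent update above can be written in NumPy for one-variable linear regression. The toy dataset is hypothetical, generated from the line y = 1 + 2x:</p>
<pre><code class="python">import numpy as np

# Hypothetical toy data for h(x) = theta0 + theta1 * x; the true line is y = 1 + 2x.
X = np.c_[np.ones(5), np.arange(5.0)]   # design matrix with a bias column of ones
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])

alpha = 0.1            # learning rate
theta = np.zeros(2)
m = len(y)
for _ in range(2000):
    grad = X.T.dot(X.dot(theta) - y) / m   # partial derivatives of J(theta)
    theta = theta - alpha * grad           # simultaneous update of all parameters
</code></pre>
<p>With this α the cost decreases on every iteration and θ converges to approximately (1, 2), recovering the true line.</p>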
<p><em>Multiple Features</em>  </p>
<blockquote>
<p>The cost function and the gradient descent update take the same form as in the single-variable case; each feature Xj simply gets its own parameter θj.  </p>
</blockquote>
</blockquote>
<h4 id="Logistic-Regression"><a href="#Logistic-Regression" class="headerlink" title="Logistic Regression"></a>Logistic Regression</h4><blockquote>
<p><em>Cost Function</em>  </p>
<blockquote>
<p><img src="/img/ml/hthetaC.png" alt="/img/ml/hthetaC.png"><br><img src="/img/ml/hthetaC3.png" alt="/img/ml/hthetaC3.png"><br><img src="/img/ml/hthetaC2.png" alt="/img/ml/hthetaC2.png"><br><img src="/img/ml/hthetaC4.png" alt="/img/ml/hthetaC4.png"><br><img src="/img/ml/hthetaC5.png" alt="/img/ml/hthetaC5.png"><br>We can fully write out our entire cost function as follows:<br><img src="/img/ml/hthetaC6.png" alt="/img/ml/hthetaC6.png"><br>A vectorized implementation is:<br><img src="/img/ml/hthetaC7.png" alt="/img/ml/hthetaC7.png">  </p>
</blockquote>
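<p>The vectorized logistic cost can be sketched in NumPy as follows; the small dataset is hypothetical:</p>
<pre><code class="python">import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # J(theta) = -(1/m) * [ y' * log(h) + (1 - y)' * log(1 - h) ]
    m = len(y)
    h = sigmoid(X.dot(theta))
    return -(y.dot(np.log(h)) + (1 - y).dot(np.log(1 - h))) / m

# hypothetical data: bias column plus one feature
X = np.c_[np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])]
y = np.array([0.0, 0.0, 1.0, 1.0])
j0 = cost(np.zeros(2), X, y)   # with theta = 0, h = 0.5 everywhere, so J = log(2)
</code></pre>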
<p><em>Decision Boundary</em>  </p>
<blockquote>
<p>In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows:<br><img src="/img/ml/decisionboundary.png" alt="/img/ml/decisionboundary.png">  </p>
</blockquote>
<p><em>Gradient Descent</em>  </p>
<blockquote>
<p>The general form of gradient descent is:<br><img src="/img/ml/gridC.png" alt="/img/ml/gridC.png"><br>We can work out the derivative part using calculus to get:<br><img src="/img/ml/gridC2.png" alt="/img/ml/gridC2.png"><br>A vectorized implementation is:<br><img src="/img/ml/gridC3.png" alt="/img/ml/gridC3.png">  </p>
</blockquote>
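<p>A minimal sketch of the vectorized update, theta := theta − (alpha/m) · X’(g(Xθ) − y), run on hypothetical separable data:</p>
<pre><code class="python">import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha):
    # one vectorized gradient descent step for logistic regression
    m = len(y)
    return theta - (alpha / m) * X.T.dot(sigmoid(X.dot(theta)) - y)

# hypothetical data: the classes are separated around x = 1.5
X = np.c_[np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])]
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = np.zeros(2)
for _ in range(5000):
    theta = gradient_step(theta, X, y, alpha=0.5)
preds = np.round(sigmoid(X.dot(theta)))
</code></pre>
<p>After training, rounding h(x) at the 0.5 threshold classifies all four points correctly.</p>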
<p><em>Multiclass Classification</em>  </p>
<blockquote>
<p><img src="/img/ml/gridCmul1.png" alt="/img/ml/gridCmul1.png"><br><strong>To summarize:</strong>  </p>
<ul>
<li>Train a logistic regression classifier hθ(x) for each class i to predict the probability that y = i.  </li>
<li>To make a prediction on a new x, pick the class that maximizes hθ(x).  </li>
</ul>
</blockquote>
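<p>The prediction step (pick the class that maximizes hθ(x)) can be sketched as below; the parameter values are made up purely for illustration:</p>
<pre><code class="python">import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_one_vs_all(all_theta, X):
    # all_theta holds one row of parameters per class; return, for each example,
    # the class whose classifier reports the highest probability h_theta(x)
    return np.argmax(sigmoid(X.dot(all_theta.T)), axis=1)

# hypothetical parameters for 3 classes over 2 features (plus the bias term)
all_theta = np.array([[ 2.0, -1.0, 0.0],
                      [-1.0,  1.0, 0.0],
                      [-3.0,  0.0, 1.5]])
X = np.c_[np.ones(2), np.array([[0.0, 0.0], [0.0, 5.0]])]
labels = predict_one_vs_all(all_theta, X)
</code></pre>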
</blockquote>
<h4 id="Neural-Networks"><a href="#Neural-Networks" class="headerlink" title="Neural Networks"></a>Neural Networks</h4><blockquote>
<p><em>Model Representation</em>  </p>
<blockquote>
<p>Algorithms that try to mimic the brain. In our model, the dendrites are like the input features X1…Xn, and the output is the result of our hypothesis function. In this model, the X0 input node is sometimes called the “bias unit”; it is always equal to 1. In neural networks, we use the same logistic function as in classification, sometimes called a sigmoid activation function.<br><img src="/img/ml/nnmodel1.png" alt="/img/ml/nnmodel1.png"><br><img src="/img/ml/nnmodel2.png" alt="/img/ml/nnmodel2.png">  </p>
</blockquote>
<p><em>Vectorized Implementation</em>  </p>
<blockquote>
<p><img src="/img/ml/nnvec1.png" alt="/img/ml/nnvec1.png"><br><img src="/img/ml/nnvec2.png" alt="/img/ml/nnvec2.png">  </p>
</blockquote>
<p><em>Examples of Application</em>  </p>
<blockquote>
<p>A simple example of applying neural networks is predicting X1 AND X2.<br>Theta(1) = [-30 20 20]<br><img src="/img/ml/nnapl1.png" alt="/img/ml/nnapl1.png"><br>And there we have the XNOR operator using a hidden layer with two nodes!<br><img src="/img/ml/nnapl2.png" alt="/img/ml/nnapl2.png">  </p>
</blockquote>
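<p>The AND unit can be checked directly in NumPy, using the Theta(1) = [-30 20 20] weights from the text:</p>
<pre><code class="python">import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta1 = np.array([-30.0, 20.0, 20.0])   # weights of the AND unit, from the text
outs = []
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a = sigmoid(theta1.dot(np.array([1.0, x1, x2])))   # bias unit x0 is always 1
    outs.append(int(np.round(a)))
</code></pre>
<p>The rounded activations come out as 0, 0, 0, 1 — exactly the AND truth table.</p>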
<p><em>Multiclass</em>  </p>
<blockquote>
<p><img src="/img/ml/nnmul1.png" alt="/img/ml/nnmul1.png">  </p>
</blockquote>
<p><em>Cost Function</em>  </p>
<blockquote>
<ul>
<li>L = total number of layers in the network  </li>
<li>Sl = number of units (not counting the bias unit) in layer l  </li>
<li>K = number of output classes<br><img src="/img/ml/nncost1.png" alt="/img/ml/nncost1.png">  </li>
</ul>
</blockquote>
<p><em>Backpropagation Algorithm</em>  </p>
<blockquote>
<p><img src="/img/ml/nnbackp1.png" alt="/img/ml/nnbackp1.png"><br><img src="/img/ml/nnbackp2.png" alt="/img/ml/nnbackp2.png">  </p>
</blockquote>
<p><em>Unrolling Parameters</em>  </p>
<blockquote>
<p><img src="/img/ml/nnunroll1.png" alt="/img/ml/nnunroll1.png"><br><img src="/img/ml/nnunroll2.png" alt="/img/ml/nnunroll2.png">  </p>
</blockquote>
<p><em>Gradient Checking</em>  </p>
<blockquote>
<p><img src="/img/ml/nncheck1.png" alt="/img/ml/nncheck1.png">  </p>
</blockquote>
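<p>Gradient checking approximates each partial derivative with the two-sided difference (J(θ+ε) − J(θ−ε)) / 2ε. A sketch, checked here against a made-up cost with a known analytic gradient:</p>
<pre><code class="python">import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    # approximate each partial derivative of J by a two-sided difference
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return grad

# hypothetical check: J = sum(theta^2) has analytic gradient 2 * theta
theta = np.array([1.0, -2.0, 3.0])
approx = numerical_gradient(lambda t: np.sum(t ** 2), theta)
</code></pre>
<p>The approximation should agree with the analytic gradient to several decimal places; if your backpropagation gradients match similarly, the implementation is likely correct.</p>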
<p><em>Random Initialization</em>  </p>
<blockquote>
<p><img src="/img/ml/nnrandom1.png" alt="/img/ml/nnrandom1.png">  </p>
</blockquote>
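<p>A sketch of random initialization in NumPy: each weight is drawn uniformly from [−ε, ε] to break symmetry. The bound 0.12 here is just a typical small value, not prescribed by the text:</p>
<pre><code class="python">import numpy as np

eps_init = 0.12   # a typical small bound (assumption, not from the text)
# weights for a hypothetical layer with 3 inputs (plus bias) and 4 units
Theta1 = np.random.rand(4, 3 + 1) * 2 * eps_init - eps_init
</code></pre>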
<p><em>Summarize</em>  </p>
<blockquote>
<p>First, pick a network architecture: choose the layout of your neural network, including how many layers you want in total and how many units in each layer.  </p>
<ul>
<li>Number of input units = dimension of features X(i)  </li>
<li>Number of output units = number of classes  </li>
<li>Number of hidden units per layer = usually, the more the better  </li>
<li>Defaults: 1 hidden layer. If you have more than 1 hidden layer, it is recommended that you have the same number of units in every hidden layer.<br>Training a Neural Network  <blockquote>
<p>1. Randomly initialize the weights (Theta).<br>2. Implement forward propagation to get the output for any input.<br>3. Implement the cost function.<br>4. Implement backpropagation to compute the partial derivatives.<br>5. Use gradient checking to confirm that your backpropagation works, then disable gradient checking.<br>6. Use gradient descent or a built-in optimization function to minimize the cost function with the weights in Theta.  </p>
</blockquote>
</li>
</ul>
</blockquote>
</blockquote>
<h4 id="Support-Vector-Machines"><a href="#Support-Vector-Machines" class="headerlink" title="Support Vector Machines"></a>Support Vector Machines</h4><h3 id="Unsupervised-Learning"><a href="#Unsupervised-Learning" class="headerlink" title="Unsupervised Learning"></a>Unsupervised Learning</h3><p><a href="#Catalog">Catalog</a>  </p>
<h4 id="Clustering"><a href="#Clustering" class="headerlink" title="Clustering"></a>Clustering</h4><blockquote>
<p>K-Means  </p>
<blockquote>
<p><em>K-means algorithm</em><br>Randomly initialize K cluster centroids<br>Repeat {<br>&emsp;for i = 1 to m<br>&emsp;&emsp;idx(i) := index (from 1 to K) of the cluster centroid closest to X(i)<br>&emsp;for k = 1 to K<br>&emsp;&emsp;centroid(k) := average (mean) of the points assigned to cluster k<br>}  </p>
</blockquote>
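<p>The loop above can be sketched in NumPy on a hypothetical toy dataset; a production implementation would also use multiple random restarts:</p>
<pre><code class="python">import numpy as np

def kmeans(X, K, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    # random initialization: pick K distinct training points as centroids
    centroids = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        # cluster assignment step: index of the closest centroid for each point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        idx = np.argmin(dists, axis=1)
        # move centroid step: average of the points assigned to each cluster
        for k in range(K):
            pts = X[idx == k]
            if len(pts):
                centroids[k] = pts.mean(axis=0)
    return centroids, idx

# two well-separated hypothetical clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centroids, idx = kmeans(X, K=2)
</code></pre>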
<blockquote>
<p><img src="/img/ml/kmeans1.png" alt="/img/ml/kmeans1.png"><br><img src="/img/ml/kmeans2.png" alt="/img/ml/kmeans2.png">  </p>
</blockquote>
</blockquote>
<h4 id="Dimensionality-Reduction"><a href="#Dimensionality-Reduction" class="headerlink" title="Dimensionality Reduction"></a>Dimensionality Reduction</h4><blockquote>
<p>Principal Component Analysis  </p>
</blockquote>
<h4 id="Anomaly-Detection"><a href="#Anomaly-Detection" class="headerlink" title="Anomaly Detection"></a>Anomaly Detection</h4><blockquote>
<p>Gaussian Distribution  </p>
</blockquote>
<h4 id="Recommender-System"><a href="#Recommender-System" class="headerlink" title="Recommender System"></a>Recommender System</h4><blockquote>
<p>Collaborative Filtering  </p>
</blockquote>
<h3 id="The-Operation-Of-Optimize"><a href="#The-Operation-Of-Optimize" class="headerlink" title="Optimization Techniques"></a>Optimization Techniques</h3><p><a href="#Catalog">Catalog</a>  </p>
<h5 id="Feature-Scaling"><a href="#Feature-Scaling" class="headerlink" title="Feature Scaling"></a>Feature Scaling</h5><blockquote>
<p><img src="/img/ml/FeatureScaling.png" alt="/img/ml/FeatureScaling.png"><br>Where Ui is the <strong>average</strong> of all the values for feature i and Si is the <strong>range</strong> of values (max − min); alternatively, Si can be the <strong>standard deviation</strong>.  </p>
</blockquote>
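<p>Mean normalization can be sketched in a few lines of NumPy; the feature matrix below is hypothetical (one column per feature):</p>
<pre><code class="python">import numpy as np

# hypothetical raw features: house size and number of bedrooms
X = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0], [1416.0, 2.0]])
mu = X.mean(axis=0)   # Ui: average of each feature
s = X.std(axis=0)     # Si: here the standard deviation (the range also works)
X_scaled = (X - mu) / s
</code></pre>
<p>After scaling, every feature has mean 0 and unit spread, which helps gradient descent converge faster.</p>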
<h5 id="Normal-Equation"><a href="#Normal-Equation" class="headerlink" title="Normal Equation"></a>Normal Equation</h5><blockquote>
<p><img src="/img/ml/NormalEquation.png" alt="/img/ml/NormalEquation.png"><br>The following is a comparison of gradient descent and the normal equation:  </p>
<table>
<thead>
<tr>
<th>Gradient Descent</th>
<th>Normal Equation  </th>
</tr>
</thead>
<tbody>
<tr>
<td>Need to choose alpha</td>
<td>No need to choose alpha  </td>
</tr>
<tr>
<td>Need many iterations</td>
<td>No need to iterate  </td>
</tr>
<tr>
<td>O(kn²)</td>
<td>O(n³), need to calculate inverse of X’X  </td>
</tr>
<tr>
<td>Works well when n is large</td>
<td>Slow if n is very large  </td>
</tr>
</tbody>
</table>
<p>If X’X is <strong>noninvertible</strong>, the common causes might be:  </p>
<ul>
<li>Redundant features, where two features are very closely related (i.e. they are linearly dependent)  </li>
<li>Too many features (e.g. m ≤ n). In this case, delete some features or use “regularization”.  </li>
</ul>
</blockquote>
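<p>A sketch of the normal equation, theta = pinv(X’X) X’y, on a hypothetical exact dataset; pinv is used so that a noninvertible X’X still yields a solution:</p>
<pre><code class="python">import numpy as np

X = np.c_[np.ones(5), np.arange(5.0)]     # design matrix with a bias column
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])   # exactly y = 1 + 2x
theta = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y)
</code></pre>
<p>No learning rate and no iterations are needed; theta comes out as (1, 2) in a single solve.</p>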
<h5 id="Advanced-Optimization"><a href="#Advanced-Optimization" class="headerlink" title="Advanced Optimization"></a>Advanced Optimization</h5><blockquote>
<p>For linear and logistic regression:</p>
<pre><code class="octave">function [jVal, gradient] = costFunction(theta)
  jVal = [...code to compute J(theta)...];
  gradient = [...code to compute derivative of J(theta)...];
end

options = optimset(&apos;GradObj&apos;, &apos;on&apos;, &apos;MaxIter&apos;, 100);
initialTheta = zeros(2,1);
[optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options);
</code></pre>
</blockquote>
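<p>For comparison, a rough SciPy analogue of the Octave snippet above, using a made-up quadratic as a stand-in for J(theta); with jac=True, the cost function returns both the value and the gradient, as fminunc expects in Octave:</p>
<pre><code class="python">import numpy as np
from scipy.optimize import minimize

def cost_function(theta):
    # hypothetical stand-in cost: J(theta) = (theta0 - 5)^2 + (theta1 - 5)^2
    j_val = (theta[0] - 5.0) ** 2 + (theta[1] - 5.0) ** 2
    gradient = np.array([2.0 * (theta[0] - 5.0), 2.0 * (theta[1] - 5.0)])
    return j_val, gradient

initial_theta = np.zeros(2)
res = minimize(cost_function, initial_theta, jac=True, method='BFGS',
               options={'maxiter': 100})
</code></pre>
<p>Like fminunc, BFGS chooses its own step sizes, so no learning rate has to be picked by hand.</p>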
<h5 id="Regularization"><a href="#Regularization" class="headerlink" title="Regularization"></a>Regularization</h5><blockquote>
<p>We will modify our gradient descent function to separate out θ(0) from the rest of the parameters, because we do not want to penalize θ(0).  </p>
<p><em>Regularized Linear Regression</em>  </p>
<blockquote>
<p><img src="/img/ml/reglinearcost.png" alt="/img/ml/reglinearcost.png"><br><img src="/img/ml/reglineargrid.png" alt="/img/ml/reglineargrid.png">  </p>
</blockquote>
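<p>The regularized gradient can be sketched in NumPy as follows, leaving θ(0) unpenalized; the data and parameter values are hypothetical:</p>
<pre><code class="python">import numpy as np

def regularized_gradient(theta, X, y, lam):
    # gradient of the regularized linear regression cost; theta[0] is not penalized
    m = len(y)
    grad = X.T.dot(X.dot(theta) - y) / m
    grad[1:] = grad[1:] + (lam / m) * theta[1:]
    return grad

# hypothetical check: with lam = 0 this reduces to the unregularized gradient
X = np.c_[np.ones(3), np.array([1.0, 2.0, 3.0])]
y = np.array([2.0, 4.0, 6.0])
g = regularized_gradient(np.array([0.0, 1.0]), X, y, lam=1.0)
</code></pre>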
<p><em>Regularized Logistic Regression</em>  </p>
<blockquote>
<p><img src="/img/ml/reglogisticcost.png" alt="/img/ml/reglogisticcost.png"><br><img src="/img/ml/reglogisticgrid.png" alt="/img/ml/reglogisticgrid.png">  </p>
</blockquote>
<p><em>Regularized Normal Equation</em>  </p>
<blockquote>
<p><img src="/img/ml/regnormalequ.png" alt="/img/ml/regnormalequ.png">  </p>
</blockquote>
</blockquote>
<h3 id="Advice"><a href="#Advice" class="headerlink" title="Advice"></a>Advice</h3><p><a href="#Catalog">Catalog</a>  </p>

            <!--[if lt IE 9]><script>document.createElement('audio');</script><![endif]-->
            <audio id="audio" loop="1" preload="auto" controls="controls" data-autoplay="true">
                <source type="audio/mpeg" src="/music/ml.mp3">
            </audio>
            
        </div>
        
    <div id="gitalk-container" class="comment link" data-ae="false" data-ci="45ef512b417cb9d1ff22" data-cs="6204f7ebcddcae66cfa96a578ec883f4da0edcd3" data-r="zwCDcc.github.io" data-o="zwCDcc" data-a="zwCDcc" data-d="false">查看评论</div>


    </div>
    
</div>


    </div>
</div>
</body>
<script src="//cdn.jsdelivr.net/npm/gitalk@1/dist/gitalk.min.js"></script>
<script src="//lib.baomitu.com/jquery/1.8.3/jquery.min.js"></script>
<script src="/js/plugin.js"></script>
<script src="/js/diaspora.js"></script>
<link rel="stylesheet" href="/photoswipe/photoswipe.css">
<link rel="stylesheet" href="/photoswipe/default-skin/default-skin.css">
<script src="/photoswipe/photoswipe.min.js"></script>
<script src="/photoswipe/photoswipe-ui-default.min.js"></script>

<!-- Root element of PhotoSwipe. Must have class pswp. -->
<div class="pswp" tabindex="-1" role="dialog" aria-hidden="true">
    <!-- Background of PhotoSwipe. 
         It's a separate element as animating opacity is faster than rgba(). -->
    <div class="pswp__bg"></div>
    <!-- Slides wrapper with overflow:hidden. -->
    <div class="pswp__scroll-wrap">
        <!-- Container that holds slides. 
            PhotoSwipe keeps only 3 of them in the DOM to save memory.
            Don't modify these 3 pswp__item elements, data is added later on. -->
        <div class="pswp__container">
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
        </div>
        <!-- Default (PhotoSwipeUI_Default) interface on top of sliding area. Can be changed. -->
        <div class="pswp__ui pswp__ui--hidden">
            <div class="pswp__top-bar">
                <!--  Controls are self-explanatory. Order can be changed. -->
                <div class="pswp__counter"></div>
                <button class="pswp__button pswp__button--close" title="Close (Esc)"></button>
                <button class="pswp__button pswp__button--share" title="Share"></button>
                <button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>
                <button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>
                <!-- Preloader demo http://codepen.io/dimsemenov/pen/yyBWoR -->
                <!-- element will get class pswp__preloader--active when preloader is running -->
                <div class="pswp__preloader">
                    <div class="pswp__preloader__icn">
                      <div class="pswp__preloader__cut">
                        <div class="pswp__preloader__donut"></div>
                      </div>
                    </div>
                </div>
            </div>
            <div class="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
                <div class="pswp__share-tooltip"></div> 
            </div>
            <button class="pswp__button pswp__button--arrow--left" title="Previous (arrow left)">
            </button>
            <button class="pswp__button pswp__button--arrow--right" title="Next (arrow right)">
            </button>
            <div class="pswp__caption">
                <div class="pswp__caption__center"></div>
            </div>
        </div>
    </div>
</div>




</html>