<html>
<head>
  <title>Unsupervised learning: seeking representations of the data</title>
  <basefont face="微软雅黑" size="2" />
  <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
  <meta name="exporter-version" content="Evernote Windows/307027 (zh-CN, DDL); Windows/6.1.0 (Win32);"/>
  <style>
    body, td {
      font-family: 微软雅黑;
      font-size: 10pt;
    }
  </style>
</head>
<body>
<a name="871"/>
<h1>Unsupervised learning: seeking representations of the data</h1>

<div>
<span><div><div><div><h2><span style="font-size: 18pt; color: rgb(28, 51, 135); font-weight: normal;">Contents:</span></h2><ul><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/mixture.html">2.1. Gaussian mixture models</a></li><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/manifold.html">2.2. Manifold learning</a></li><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/clustering.html">2.3. Clustering</a></li><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/biclustering.html">2.4. Biclustering</a></li><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/decomposition.html">2.5. Decomposing signals in components (matrix factorization problems)</a></li><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/covariance.html">2.6. Covariance estimation</a></li><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/outlier_detection.html">2.7. Novelty and Outlier Detection</a></li><li style="cursor: pointer; background-color: rgb(208, 208, 208);"><a href="http://scikit-learn.org/stable/modules/density.html">2.8. Density Estimation</a></li><li style="cursor: pointer; background-color: white;"><a href="http://scikit-learn.org/stable/modules/neural_networks_unsupervised.html">2.9. 
Neural network models (unsupervised)</a></li></ul><div><br/></div><div><br/></div><div><br/></div><h2><span style="font-size: 18pt; color: rgb(28, 51, 135); font-weight: normal;">Clustering: grouping observations together</span></h2></div><div>        Given the iris dataset, if we knew that there were 3 types of iris but <font style="font-size: 12pt;"><span style="color: rgb(173, 0, 0); font-size: 12pt; font-weight: bold;">did not have access to a taxonomist to label them</span></font>, we could try a <span style="font-weight: bold;">clustering task</span>: split the observations into <font style="font-size: 12pt;"><span style="color: rgb(173, 0, 0); font-size: 12pt; font-weight: bold;">well-separated groups called</span> <span style="color: rgb(173, 0, 0); font-size: 12pt; font-weight: bold; font-style: italic;">clusters</span></font>.</div><div><br/></div><h3>K-means clustering</h3><div><font style="font-size: 14pt;"><span style="font-size: 14pt; font-weight: bold;">Warning</span></font></div><div>There is absolutely no guarantee of recovering a ground truth. </div><div><ul><li><span style="line-height: 1.45;">First, choosing the right number of clusters is hard.</span></li><li><span style="line-height: 1.45;">Second, the algorithm is sensitive to initialization and can fall into local minima, although scikit-learn employs several tricks to mitigate this issue.</span></li></ul></div><div><br/></div><h3>Hierarchical agglomerative clustering: Ward</h3><div>        A <a href="http://scikit-learn.org/stable/modules/clustering.html#hierarchical-clustering">hierarchical clustering</a> method is a type of cluster analysis that aims to build a hierarchy of clusters. 
In general, the various approaches of this technique are either</div><blockquote><ul><li><span style="font-weight: bold;">Agglomerative</span> - bottom-up approaches: each observation starts in its own cluster, and clusters are iteratively merged in such a way as to minimize a <span style="font-style: italic;">linkage</span> criterion. This approach is particularly interesting when the clusters of interest are made of only a few observations. When the number of clusters is large, it is much more computationally efficient than k-means.</li><li><span style="font-weight: bold;">Divisive</span> - top-down approaches: all observations start in one cluster, which is iteratively split as one moves down the hierarchy. For estimating large numbers of clusters, this approach is both slow (due to all observations starting as one cluster, which it splits recursively) and statistically ill-posed.</li></ul></blockquote><div><br/></div><ul><li><span style="line-height: 1.45;">Connectivity-constrained clustering</span></li><li><span style="line-height: 1.45;">Feature agglomeration</span></li></ul><h3><span style="color: rgb(28, 51, 135); font-size: 18pt; font-weight: normal;">Principal component analysis: PCA</span></h3><div><a href="http://scikit-learn.org/stable/modules/decomposition.html#pca">Principal component analysis (PCA)</a> selects the successive components that explain the maximum variance in the signal.</div><div style="margin-top: 1em; margin-bottom: 1em;"><a href="http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_3d.html"><img src="Unsupervised learning seeking representations_files/Image.png" type="image/png" data-filename="Image.png" style="width: 280.0px; height: 210.0px;"/></a> <a href="http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_3d.html"><img src="Unsupervised learning seeking representations_files/Image [1].png" type="image/png" data-filename="Image.png" style="width: 280.0px; height: 210.0px;"/></a></div><div>The point 
cloud spanned by the observations above is very flat in one direction: <span style="font-weight: bold;">one of the three univariate features can almost be exactly computed using the other two</span>. <font style="font-size: 12pt;"><span style="color: rgb(173, 0, 0); font-size: 12pt; font-weight: bold;">PCA finds the directions in which the data is not</span> <span style="color: rgb(173, 0, 0); font-size: 12pt; font-weight: bold; font-style: italic;">flat</span></font>.</div><div style="box-sizing: border-box; padding: 8px; font-size: 12px; border-top-left-radius: 4px; border-top-right-radius: 4px; border-bottom-right-radius: 4px; border-bottom-left-radius: 4px; background-color: rgb(251, 250, 248); border: 1px solid rgba(0, 0, 0, 0.14902);"><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; import numpy as np</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; # a signal with only 2 useful dimensions</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; x1 = np.random.normal(size=100)</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; x2 = np.random.normal(size=100)</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; x3 = x1 + x2</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; X = np.c_[x1, x2, x3]</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; from sklearn import decomposition</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; pca = decomposition.PCA()</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; pca.fit(X)</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">PCA(copy=True, iterated_power='auto', n_components=None, random_state=None, svd_solver='auto', tol=0.0, whiten=False)</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">&gt;&gt;&gt; print(pca.explained_variance_)</span></div><div><span style="font-size: 9pt; background-color: rgb(251, 250, 248); color: rgb(51, 51, 51); font-family: Monaco;">[ 2.18565811e+00    1.19346747e+00    8.43026679e-32]</span></div></div><h3>Independent Component Analysis: ICA</h3><div style="margin-top: 1em; margin-bottom: 
1em;"><a href="http://scikit-learn.org/stable/modules/decomposition.html#ica">Independent component analysis (ICA)</a> selects components so that the distribution of their loadings carries a maximum amount of independent information. It is able to recover <span style="font-weight: bold;">non-Gaussian</span> independent signals:</div><div><a href="http://scikit-learn.org/stable/auto_examples/decomposition/plot_ica_blind_source_separation.html"><img src="Unsupervised learning seeking representations_files/Image [2].png" type="image/png" data-filename="Image.png" style="width: 448.0px; height: 336.0px;"/></a></div><div><br/></div><div><br/></div><div><br/></div><div><br/></div><h1 style="text-align: center;"><span style="color: rgb(28, 51, 135); font-size: 24pt;">Putting it all together</span></h1><h2><span style="color: rgb(28, 51, 135); font-size: 18pt; font-weight: normal;">Pipelining</span></h2><div>We have seen that some estimators can transform data and that some estimators can predict variables. 
We can also create <font style="font-size: 12pt;"><span style="color: rgb(173, 0, 0); font-size: 12pt; font-weight: bold;">combined estimators</span></font>.</div><div><br/></div><div><br/></div><h2><span style="color: rgb(28, 51, 135); font-size: 18pt; font-weight: normal;">Face recognition with eigenfaces</span></h2><div><br/></div></div><div><br/></div><h1 style="text-align: center;"><font style="font-size: 24pt; color: rgb(28, 51, 135);">Gaussian mixture models</font></h1><div><span>    <span>    </span></span>A Gaussian Mixture Model (GMM) uses Gaussian probability density functions (normal distribution curves) to quantify a phenomenon precisely, decomposing it into several component models, each built from a Gaussian probability density function. Put plainly: however the observed data are distributed and whatever pattern they exhibit, they can be fitted by a mixture of several single Gaussian models.</div><div><span style="-en-paragraph: true;"><span>    <span>    </span>The most familiar single Gaussian model (single Gaussian distribution) is the bell curve, although the bell curve is only the one-dimensional case. The Gaussian distribution is also called the normal distribution.</span></span></div><div style="text-align: center;"><img src="Unsupervised learning seeking representations_files/Image [3].png" type="image/png" data-filename="Image.png"/></div><div><br/></div><div><br/></div><div><span style="-en-paragraph: true;">Its basic definition: if a random variable X follows a Gaussian distribution with mathematical expectation μ and variance σ^2, we write X ~ N(μ, σ^2). The expectation μ is the mean (arithmetic average), and σ is the standard deviation (the square root of the variance). The probability density function of the Gaussian distribution is:</span></div><div style="text-align: center; margin-top: 1em; margin-bottom: 1em;"><img src="Unsupervised learning seeking representations_files/Image [4].png" type="image/png" data-filename="Image.png"/><br/></div><div style="-en-paragraph: true; margin-top: 1em; margin-bottom: 1em;"><span style="-en-paragraph: true;">The formula above is only the one-dimensional Gaussian model; in the multivariate case the probability density function is:</span></div><div style="text-align: center;"><img src="Unsupervised learning seeking representations_files/Image [5].png" type="image/png" data-filename="Image.png"/></div><div><br/></div><div><span>    <span>    </span></span>Geometrically, a single Gaussian model is roughly an ellipse in two-dimensional space (like the figure at the very beginning of this article) and roughly an ellipsoid in three-dimensional space. The trouble is that in many classification problems, the samples belonging to one class do not follow this "elliptical" distribution, so Gaussian mixture models are introduced to handle such cases.</div><div><font style="font-size: 24pt; color: rgb(50, 135, 
18);"><b>Intuition: it is like scattering beans onto a line, a plane, or a solid volume, and looking at how the beans end up distributed.</b></font></div><div><br/></div><div><br/></div><h2><font style="font-size: 18pt; font-weight: normal; color: rgb(28, 51, 135);">Gaussian Mixture</font></h2><div>A demonstration of the GMM algorithm: using a GMM for classification.</div><div><span style="-en-paragraph: true;"><font color="#AD0000" style="font-size: 12pt;"><b>Typical application scenarios for Gaussian mixture models include:</b></font></span></div><ul><li><span style="font-family: Tahoma, Simsun;"><font color="#AD0000" style="font-size: 12pt;"><b>Classifying data sets, such as segmenting members;</b></font></span></li><li><font color="#AD0000" style="font-size: 12pt;"><b>Image segmentation and feature extraction, e.g. tracking people and distinguishing actions in video, or <span style="font-family: Tahoma, Simsun;">recognizing cars, buildings, and so on</span>;</b></font></li><li><font color="#AD0000" style="font-size: 12pt;"><b>Speech segmentation and feature extraction, e.g. isolating one person's voice from a jumble of sounds, extracting the backing track from a piece of music, or picking out the sound of an earthquake from natural noise.</b></font></li></ul><div><br/></div><h2><font style="font-size: 18pt; font-weight: normal; color: rgb(28, 51, 135);">Variational Bayesian Gaussian Mixture</font></h2><div><br/></div><div><br/></div><div><br/></div><div><br/></div><div><br/></div><div><br/></div></div></span>
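<div>The Gaussian Mixture section above promises a classification demo but gives no code. A minimal sketch, assuming scikit-learn's <code>GaussianMixture</code> on the iris data (the dataset choice, the parameters, and the majority-vote label matching are illustrative assumptions, not from the original note):</div>

```python
# Fit a 3-component Gaussian mixture to the iris measurements and
# compare the recovered clusters to the true species labels.
import numpy as np
from sklearn import datasets
from sklearn.mixture import GaussianMixture

iris = datasets.load_iris()
X, y = iris.data, iris.target

# One full-covariance Gaussian per presumed species.
gmm = GaussianMixture(n_components=3, covariance_type='full',
                      random_state=0)
gmm.fit(X)
labels = gmm.predict(X)

# Cluster indices are arbitrary, so map each cluster to the majority
# species it contains before measuring agreement.
mapped = np.empty_like(labels)
for k in range(3):
    mask = labels == k
    mapped[mask] = np.bincount(y[mask]).argmax()
accuracy = (mapped == y).mean()
print("agreement with true species: %.2f" % accuracy)
```

<div>Even though no labels are used during fitting, on iris the three fitted components typically line up closely with the three species, which is the sense in which a GMM can be used "for classification" here.</div>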
</div></body></html> 