<!DOCTYPE html>
<html>
<head>
<meta name="viewport"
	content="width=device-width,initial-scale=1,minimum-scale=1,maximum-scale=1,user-scalable=no" />
<meta charset="UTF-8">

<!-- tdk start -->
<title>K-means Clustering: Principles, Python and Mahout Implementations - 秋水的博客</title>
<meta name="Keywords" content="K-means clustering, principles, Python and Mahout implementations" />
<meta name="Description" content="K-means clustering: principles, Python and Mahout implementations" />
<!-- tdk end -->

<script
	src="http://cdn.bootcss.com/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<link rel="dns-prefetch" href="//cdn.bootcss.com" />

</head>
<link type="text/css" rel="stylesheet" href="../styles/main_text.css" />
<body>
	<div class="content">

		<!-- navigation start -->
		<a href="../index.html">Index</a> &gt; <a href="../ml.html">Machine Learning</a>

		<!-- navigation end -->

		<h1>K-means Clustering: Principles, Python and Mahout Implementations</h1>
		<span class="myright">Please respect copyright! For reprints, please contact the <a href="#wechat">author</a>.</span>
		<hr style="margin-bottom: 50px;" />

		<!-- main content start -->
		<div class="main_text">
			<h2>How K-means works</h2>
			<p>
				K-means is widely used for clustering. Its greatest strengths are its <b style="color: blue">simplicity and speed</b>; the key design choices are the <b
					style="color: blue">selection of the initial centers</b> and the <b style="color: blue">distance metric</b>.
			</p>
			<p>Given a data set s={o^1,o^2,o^3 ... o^n} that we wish to partition into k clusters (k1, k2 ...
				kk), K-means can be described as follows:</p>
			<blockquote>K-means takes a parameter k and partitions the n data objects into k clusters.
				These clusters satisfy the property that if o1 belongs to cluster k1, then o1 is more similar to the other objects in k1 than to objects in any other cluster. In K-means, similarity is measured by the distance from an object to the cluster's center.
			</blockquote>
			<p>K-means proceeds as follows:</p>
			<ol>
				<li>Choose k suitable objects as the initial centers of the k clusters</li>
				<li>In iteration x, compute the distance from each sample to all k centers and assign the sample to the cluster of the nearest center</li>
				<li>Update each cluster's center, e.g. as the mean of its members</li>
				<li>If none of the k cluster centers change after steps 2 and 3, stop; otherwise keep iterating</li>
			</ol>
			<h2>Convergence of K-means</h2>
			<p>How does K-means guarantee convergence?</p>
			<h3>Objective function</h3>
			<p>
				Take the squared error as the objective function, where <span>\(\gamma_{nk}\)</span> indicates whether point <span>\(x_{n}\)</span> is assigned to cluster k:<br /> <span>\(J(\mu_{1},\mu_{2},...,\mu_{K})
					= \frac{1}{2}\sum_{k=1}^{K}\sum_{n=1}^{N}\gamma_{nk}\|x_{n}-\mu_{k}\|^{2}\)</span>
			</p>
			<h3>E-Step</h3>
			<p>
				Hold the parameters <span>\(\mu_{k}\)</span> fixed and assign each data point to the cluster whose center is nearest:
			</p>
			<p>\[ \gamma_{nk} = \begin{cases} 1, &amp; \text{if $k =
				\arg\min_{j}\|x_{n}-\mu_{j}\|^{2}$ } \\ 0, &amp; \text{otherwise}
				\end{cases} \]</p>
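			<p>The assignment step can be sketched with NumPy (an illustrative snippet only, separate from the full implementation given later):</p>
			<div class="code_area">
				<pre class="brush: python">
import numpy as np

def e_step(X, centers):
    # squared Euclidean distance from every point to every center: shape (N, K)
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # gamma_nk is 1 exactly for the nearest center, so return its index per point
    return dists.argmin(axis=1)
				</pre>
			</div>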
			<h3>M-Step</h3>
			<p>
				Hold the assignments fixed and update the parameters (cluster centers) <span>\(\mu_{k}\)</span>:<br /> <span>\(\mu_{k}
					= \frac{\sum_{n}\gamma_{nk}x_{n}}{\sum_{n}\gamma_{nk}}\)</span>
			</p>
			<p>
				In the E-step, the assignment <span>\(\gamma\)</span> that best decreases the objective is chosen; in the M-step, <span>\(\gamma\)</span> is held fixed and the means <span>\(\mu\)</span> are updated to the optimal values under that assignment. Neither step can increase J, and there are only finitely many possible assignments, so the algorithm must converge.
			</p>
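			<p>
				Concretely, the M-step update is exactly the minimizer of J for a fixed assignment: setting the derivative of J with respect to <span>\(\mu_{k}\)</span> to zero gives
			</p>
			<p>\[ \frac{\partial J}{\partial \mu_{k}} = -\sum_{n}\gamma_{nk}(x_{n}-\mu_{k}) = 0 \;\Rightarrow\; \mu_{k} = \frac{\sum_{n}\gamma_{nk}x_{n}}{\sum_{n}\gamma_{nk}} \]</p>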
			<h2>K-means in Python</h2>
			<ol>
				<li>Install the required libraries:<br /> sudo pip --default-timeout=100 install -U
					python-graph-core numpy scipy scikit-learn matplotlib
				</li>
				<li>Implementation
					<div class="code_area">
						<pre class="brush: python">

from numpy import *
import time
import matplotlib.pyplot as plt


# calculate Euclidean distance
def euclDistance(vector1, vector2):
    return sqrt(sum(power(vector2 - vector1, 2)))


# init centroids with random samples
def initCentroids(dataSet, k):
    numSamples, dim = dataSet.shape
    centroids = zeros((k, dim))
    for i in range(k):
        index = int(random.uniform(0, numSamples))
        centroids[i, :] = dataSet[index, :]
    return centroids


# k-means cluster
def kmeans(dataSet, k):
    numSamples = dataSet.shape[0]
    # first column stores which cluster this sample belongs to,
    # second column stores the error between this sample and its centroid
    clusterAssment = mat(zeros((numSamples, 2)))
    clusterChanged = True

    ## step 1: init centroids
    centroids = initCentroids(dataSet, k)
    print(centroids)
    while clusterChanged:
        clusterChanged = False
        ## for each sample
        for i in range(numSamples):
            minDist = float('inf')
            minIndex = 0
            ## for each centroid
            ## step 2: find the centroid who is closest
            for j in range(k):
                distance = euclDistance(centroids[j, :], dataSet[i, :])
                if distance &lt; minDist:
                    minDist = distance
                    minIndex = j

            ## step 3: update its cluster assignment
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
                clusterAssment[i, :] = minIndex, minDist ** 2

        ## step 4: update centroids
        for j in range(k):
            pointsInCluster = dataSet[nonzero(clusterAssment[:, 0].A == j)[0]]
            centroids[j, :] = mean(pointsInCluster, axis=0)

    print('Congratulations, cluster complete!')
    return centroids, clusterAssment


# show your cluster only available with 2-D data
def showCluster(dataSet, k, centroids, clusterAssment):
    numSamples, dim = dataSet.shape
    if dim != 2:
        print("Sorry! I can not draw because the dimension of your data is not 2!")
        return 1

    mark = ['or', 'ob', 'og', 'ok', '^r', '+r', 'sr', 'dr', '&lt;r', 'pr']
    if k &gt; len(mark):
        print("Sorry! Your k is too large!")
        return 1

    # draw all samples
    for i in range(numSamples):
        markIndex = int(clusterAssment[i, 0])
        plt.plot(dataSet[i, 0], dataSet[i, 1], mark[markIndex])

    mark = ['Dr', 'Db', 'Dg', 'Dk', '^b', '+b', 'sb', 'db', '&lt;b', 'pb']
    # draw the centroids
    for i in range(k):
        plt.plot(centroids[i, 0], centroids[i, 1], mark[i], markersize=12)

    plt.show()						
						

## step 1: load data
print("step 1: load data...")
dataSet = []
fileIn = open('/tmp/test.data')
for line in fileIn.readlines():
    lineArr = line.strip().split(' ')
    dataSet.append([float(lineArr[0]), float(lineArr[1])])

## step 2: clustering...
print("step 2: clustering...")
dataSet = mat(dataSet)
k = 4
centroids, clusterAssment = kmeans(dataSet, k)

## step 3: show the result
print("step 3: show the result...")
showCluster(dataSet, k, centroids, clusterAssment)						
						
						</pre>
					</div>
				</li>
				<li>The result is shown in the figure below: similar points (indicated by color) are grouped neatly into the same cluster<br /> <br />
					<div>
						<img width="500" src="../imgs/python_kmeans_01.png">
					</div></li>
				<li>Code walkthrough</li>
			</ol>
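			<p>For comparison, the same kind of clustering can be done with scikit-learn's KMeans class (a minimal sketch; the toy data and parameter choices here are made up for illustration):</p>
			<div class="code_area">
				<pre class="brush: python">
import numpy as np
from sklearn.cluster import KMeans

# two well-separated toy blobs around (0, 0) and (10, 10)
rng = np.random.RandomState(0)
data = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 10])

# n_init random restarts are tried; the run with the lowest
# inertia (the squared-error objective J above) is kept
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

print(km.cluster_centers_)  # one center near (0, 0), one near (10, 10)
				</pre>
			</div>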
			<h2>K-means in Mahout</h2>
			<h3>Vectorization in Mahout</h3>
			<p>Once objects have been featurized, each object can be represented as a vector. In Mahout, the data model for vectors is the Vector class. After vectorization, similarity between objects can be expressed as a distance.
				Vector has three implementations: DenseVector, RandomAccessSparseVector, and SequentialAccessSparseVector, each with its own characteristics:</p>
			<ol>
				<li>DenseVector can be viewed as an array of doubles whose size is the number of features in the data. Space is pre-allocated for every element regardless of whether it is zero, which is why it is called dense.</li>
				<li>RandomAccessSparseVector is implemented as a HashMap from integers to doubles; only nonzero elements are allocated space, so this kind of vector is called sparse.</li>
				<li>SequentialAccessSparseVector is implemented as two parallel arrays, one of integers and one of doubles, keeping only the nonzero elements. Unlike RandomAccessSparseVector, which is oriented toward random access, it is optimized for sequential reads.
				</li>
			</ol>
			<p>Choose the vector implementation that fits your algorithm's access pattern: for heavy random access, DenseVector or
				RandomAccessSparseVector is the better choice; if access is mostly sequential, SequentialAccessSparseVector performs better.</p>
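			<p>The difference between the three storage layouts can be illustrated in plain Python (this mimics only the storage idea; it is not Mahout's API):</p>
			<div class="code_area">
				<pre class="brush: python">
# dense: one slot per feature, zeros included (like DenseVector)
dense = [0.0, 0.0, 3.5, 0.0, 1.2]

# map from index to value, nonzeros only (like RandomAccessSparseVector)
sparse_map = {2: 3.5, 4: 1.2}

# two parallel arrays of indices and values (like SequentialAccessSparseVector)
indices, values = [2, 4], [3.5, 1.2]

# all three represent the same vector
recovered = [sparse_map.get(i, 0.0) for i in range(len(dense))]
				</pre>
			</div>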
			<h3>How to vectorize data</h3>
			<p>In short, we typically need to vectorize the following two kinds of data:</p>
			<ol>
				<li>Plain integer or floating-point data<br />
					<p>This is the simplest case: just store the individual fields in a vector. A point in n-dimensional space, for example, is already essentially a vector.</p></li>
				<li>Enumerated (categorical) data
					<p>This kind of data describes an object but takes values from a limited set. For example, suppose you have a data set of apples, where each apple is described by its size, weight, color, and so on. Take color: say the possible values are red, yellow, and green. When modeling, we can encode the colors as numbers: red = 1, yellow = 2, green = 3. Then an apple 8 cm in diameter, weighing 0.15 kg, and red in color is modeled as the vector &lt;8, 0.15,
						1&gt;.</p>
				</li>
			</ol>
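			<p>The encoding described above can be sketched in Python (the color codes follow the text; the helper function is hypothetical):</p>
			<div class="code_area">
				<pre class="brush: python">
# categorical value to numeric code, as in the text: red=1, yellow=2, green=3
COLOR_CODES = {"red": 1, "yellow": 2, "green": 3}

def apple_vector(diameter_cm, weight_kg, color):
    # vector layout (size, weight, color code), mirroring the apple example
    return [diameter_cm, weight_kg, COLOR_CODES[color]]
				</pre>
			</div>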
			<p>The listings below give examples of vectorizing both kinds of data.</p>
			<div class="code_area">
				<span><span style="color: red">[Listing 02] Building a vector group from a set of 2-D points</span></span>
				<pre class="brush: java">

public static final double[][] points = { { 1, 1 }, { 2, 1 }, { 1, 2 }, { 2, 2 }, { 3, 3 }, { 8, 8 }, { 9, 8 },
		{ 8, 9 }, { 9, 9 }, { 5, 5 }, { 5, 6 }, { 6, 6 } };

public static List&lt;Vector&gt; getPointVectors(double[][] raw) {
	List&lt;Vector&gt; points = new ArrayList&lt;Vector&gt;();
	for (int i = 0; i &lt; raw.length; i++) {
		double[] fr = raw[i];
		// create a RandomAccessSparseVector here
		Vector vec = new RandomAccessSparseVector(fr.length);
		// store the data in the newly created Vector
		vec.assign(fr);
		points.add(vec);
	}
	return points;
}
			</pre>
			</div>
			<br />
			<div class="code_area">
				<span><span style="color: red">[Listing 03] Building a vector group from the apple data</span></span>
				<pre class="brush: java">
public static List&lt;Vector&gt; generateAppleData() {
	List&lt;Vector&gt; apples = new ArrayList&lt;Vector&gt;();
	// NamedVector wraps one of the Vector implementations above,
	// giving each Vector a human-readable name
	NamedVector apple = new NamedVector(new DenseVector(new double[] { 0.11, 510, 1 }), "Small round green apple");
	apples.add(apple);
	apple = new NamedVector(new DenseVector(new double[] { 0.2, 650, 3 }), "Large oval red apple");
	apples.add(apple);
	apple = new NamedVector(new DenseVector(new double[] { 0.09, 630, 1 }), "Small elongated red apple");
	apples.add(apple);
	apple = new NamedVector(new DenseVector(new double[] { 0.25, 590, 3 }), "Large round yellow apple");
	apples.add(apple);
	apple = new NamedVector(new DenseVector(new double[] { 0.18, 520, 2 }), "Medium oval green apple");
	apples.add(apple);
	return apples;
}</pre>
			</div>

			<h3>A clustering example with Mahout</h3>
			<p>
				This example runs Mahout clustering on a single-machine, pseudo-distributed Hadoop cluster. You can build it with Maven: <a
					href="./pom.xml">see the pom file</a>
			</p>
			<div class="code_area">
				<span><span style="color: red">[Listing 04] HDFS utility class</span></span>
				<pre class="brush: java">
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// excerpt from class HdfsUtils (class declaration omitted)
FileSystem fs = null;

public HdfsUtils(Configuration conf) {
	try {
		fs = FileSystem.get(conf);
	} catch (IOException e) {
		e.printStackTrace();
	}
}

public void mkdirs(String folder) throws IOException {
	Path path = new Path(folder);
	if (!fs.exists(path)) {
		fs.mkdirs(path);
		System.out.println("Create: " + folder);
	}
}

public void rmr(String folder) throws IOException {
	Path path = new Path(folder);
	fs.deleteOnExit(path);
	System.out.println("Delete: " + folder);
}

public void ls(String folder) throws IOException {
	Path path = new Path(folder);
	FileStatus[] list = fs.listStatus(path);
	System.out.println("ls: " + folder);
	System.out.println("==========================================================");
	for (FileStatus f : list) {
		System.out.printf("name: %s, folder: %s, size: %d\n", f.getPath(), f.isDirectory(), f.getLen());
	}
	System.out.println("==========================================================");
}

public void createFile(String file, String content) throws IOException {
	byte[] buff = content.getBytes();
	FSDataOutputStream os = null;
	try {
		os = fs.create(new Path(file));
		os.write(buff, 0, buff.length);
		System.out.println("Create: " + file);
	} finally {
		if (os != null)
			os.close();
	}
}

public void copyFile(String local, String remote) throws IOException {
	fs.copyFromLocalFile(new Path(local), new Path(remote));
	System.out.println("copy from: " + local + " to " + remote);
}
				</pre>
			</div>
			<br />
			<div class="code_area">
				<span><span style="color: red">[Listing 05] Running Mahout clustering</span></span>
				<pre class="brush: java">
public static void main(String[] args) throws Exception {
	String localFile = "/tmp/randomData.csv";
	String inPath = "/tmp/mahout_data";
	String seqFile = inPath + "/seqfile";
	String seeds = inPath + "/seeds";
	String outPath = inPath + "/result/";
	String clusteredPoints = outPath + "/clusteredPoints";

	Configuration conf = new Configuration();
	HdfsUtils hdfs = new HdfsUtils(conf);
	hdfs.rmr(inPath);
	hdfs.mkdirs(inPath);
	hdfs.copyFile(localFile, inPath);
	hdfs.ls(inPath);

	InputDriver.runJob(new Path(inPath), new Path(seqFile), "org.apache.mahout.math.RandomAccessSparseVector");

	int k = 3;
	Path seqFilePath = new Path(seqFile);
	Path clustersSeeds = new Path(seeds);

	// choose the initial centroids
	DistanceMeasure measure = new EuclideanDistanceMeasure();
	clustersSeeds = RandomSeedGenerator.buildRandom(conf, seqFilePath, clustersSeeds, k, measure);
	// run k-means clustering
	KMeansDriver.run(conf, seqFilePath, clustersSeeds, new Path(outPath), 0.01, 10, true, 0.01, false);

	// dump
	Path outGlobPath = new Path(outPath, "clusters-*-final");
	Path clusteredPointsPath = new Path(clusteredPoints);
	System.out.printf("Dumping out clusters from clusters: %s and clusteredPoints: %s\n", outGlobPath,
			clusteredPointsPath);

	ClusterDumper clusterDumper = new ClusterDumper(outGlobPath, clusteredPointsPath);
	clusterDumper.printClusters(null);
}
				</pre>
			</div>
			<br />
			<div class="code_area">
				<span><span style="color: red">[Listing 06] randomData.csv</span></span>
				<a href="./randomData.txt">Test data: randomData.csv</a>
			</div>
			<br />
			<p>That concludes this article: we covered how k-means works and how to implement it in Python and with Mahout.</p>
			<!-- main content end -->

			<!-- footer start  -->

			<hr style="margin-top: 50px;" />
			<h2>Related reading</h2>
			<a
				href="https://www.ibm.com/developerworks/cn/web/1103_zhaoct_recommstudy3/">Recommendation
				engine algorithms in depth - clustering</a>
			<p>
				<a href="/tks.html">Acknowledgements</a>
			</p>
			<div style="margin-bottom: 50px;">
				<table>
					<tr>
						<td style="font-weight: 300" height="" width="200px">
							<div style="margin-top: 10px;">
								Add me on WeChat <img style="margin-top: 5px;" id="wechat" width="170"
									src="../imgs/wechat.png"> <br />Scan to share to Moments
								<p id="qrcode"
									style="margin-top: 5px; width: 185px; height: 188px; margin: 0;"></p>
								<script type="text/javascript" src="../scripts/qrcode.min.js">
									
								</script>
								<script type="text/javascript" src="../scripts/jquery.min.js"></script>
								<script type="text/javascript">
									var qrcode = new QRCode(document
											.getElementById("qrcode"), {
										width : 170,
										height : 170
									});

									function makeCode() {
										qrcode.makeCode(window.location.href);
									}
									makeCode();
								</script>

							</div>
						</td>
						<td>
							<div
								style="margin-top: 10px; margin-left: 100px; background: rgba(224, 224, 224, 0.22); border: 1px solid rgba(224, 224, 224, 0.22);">
								<div style="margin: 20px;">
									<p style="font-weight: 300">Tips</p>
									<p style="color: blue">If a technique in this article solved a problem for you at work or saved you from a pitfall, please consider paying for it; 30 yuan is suggested, but not required. Free for students.</p>


									
									<div>
										<img width="200px;" src="../imgs/we_pay.png"><img
											width="200px;" src="../imgs/ali_pay.png">

									</div>
								</div>
							</div>
						</td>
					</tr>
					<tr>

					</tr>
				</table>
			</div>
			<!-- footer end  -->
		</div>
</body>

<script type="text/javascript" src="../scripts/XRegExp.js"></script>
<!-- XRegExp is bundled with the final shCore.js during build -->
<script type="text/javascript" src="../scripts/shCore.js"></script>
<script type="text/javascript" src="../scripts/shBrushJScript.js"></script>
<script type="text/javascript" src="../scripts/shBrushPython.js"></script>
<script type="text/javascript" src="../scripts/shBrushJava.js"></script>
<link type="text/css" rel="stylesheet" href="../styles/shCore.css" />
<link type="text/css" rel="stylesheet"
	href="../styles/shThemeDefault.css" />
<script type="text/javascript">
	//SyntaxHighlighter.all();
</script>

</html>
<script>
	function IsPC() {
		var userAgentInfo = navigator.userAgent;
		var Agents = [ "Android", "iPhone", "SymbianOS", "Windows Phone",
				"iPod" ];
		var flag = true;
		for (var v = 0; v < Agents.length; v++) {
			if (userAgentInfo.indexOf(Agents[v]) > 0) {
				flag = false;
				break;
			}
		}
		if (window.screen.width >= 768) {
			flag = true;
		}
		return flag;
	}
	if (!IsPC()) {
		$(".content").css("width", "100%");
		$("img").css("width", "100%");
		$("pre").css("width", "100%");
		$("pre").css("overflow", "auto");
		/**
		$("pre").css("white-space", "pre-wrap");
		$("pre").css("word-wrap", "break-word");
		 **/

	} else {
		SyntaxHighlighter.all();
	}
</script>