{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Import modules\n",
    "Load the dataset\n",
    "Split the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "from numpy import *\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2\n",
    "from sklearn.datasets import load_boston\n",
    "boston = load_boston()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define the helper functions:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 1. Leaf node: return the mean of the target values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Model for a leaf node: the mean of the target values in the subset\n",
    "def regLeaf(dataSet):\n",
    "    return mean(dataSet[:,-1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 2. Compute the variance of the target values over a sample subset.\n",
    "##### Because the output space is continuous rather than discrete,\n",
    "##### variance replaces the Gini index as the measure of sample purity."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# var() is the mean squared deviation from the mean; multiplying by the\n",
    "# sample count gives the total squared error of the subset\n",
    "def regErr(dataSet):\n",
    "    return var(dataSet[:,-1])*shape(dataSet)[0]"
   ]
  },
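  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick illustration (on small hypothetical data, not the Boston set) of why this works as an impurity measure: splitting a mixed subset at a good point makes the summed error of the two halves far smaller than the error of the whole subset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical 4-sample subset: the last column is the target\n",
    "demo = mat([[0.1, 1.0], [0.2, 1.1], [0.8, 5.0], [0.9, 5.2]])\n",
    "whole = regErr(demo)                              # error before splitting\n",
    "split = regErr(demo[:2,:]) + regErr(demo[2:,:])   # error after splitting at 0.5\n",
    "print(whole, split)  # the split error is far smaller"
   ]
  },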
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 3. Mean squared error, used to evaluate the quality of training and prediction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def err(data_set, node):\n",
    "    sum_err = 0.0\n",
    "    for i in range(data_set.shape[0]):\n",
    "        residual = data_set[i, -1] - predict(node, data_set[i])\n",
    "        sum_err += math.pow(residual, 2)\n",
    "    return sum_err / float(data_set.shape[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Find the best feature and the best split point.\n",
    "##### Because the outputs are continuous, the result of a split is not exact, so a threshold on the error reduction serves as the stopping rule for the recursive tree construction.\n",
    "##### Returns the best feature index and the best split value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Find the best binary split of the data; returns the feature index and the split value\n",
    "# The stopping threshold tolS is very sensitive to the magnitude of the error\n",
    "def chooseBestSplit(dataSet,leafType=regLeaf,errType=regErr,ops=(1,4)):\n",
    "    # tolS and tolN control when the function stops splitting\n",
    "    tolS=ops[0] # minimum error reduction required for a split\n",
    "    tolN=ops[1] # minimum number of samples in a split\n",
    "    # Exit if all target values are identical\n",
    "#     if len(set(dataSet[:,-1].T.tolist()[0]))==1:\n",
    "#         return None,leafType(dataSet)\n",
    "    m,n=shape(dataSet)\n",
    "    # Error of the whole subset before splitting\n",
    "    S=errType(dataSet)\n",
    "    # Best error, best feature index, best split value\n",
    "    bestS=inf; bestIndex=0; bestValue=0\n",
    "    # Iterate over the feature columns\n",
    "    for featIndex in range(n-1):\n",
    "        # Iterate over the candidate split values\n",
    "        for splitVal in set(dataSet[:,featIndex].T.A.tolist()[0]):\n",
    "            mat0,mat1=binSplitDataSet(dataSet,featIndex,splitVal)\n",
    "            # Skip splits that leave fewer than tolN samples on either side\n",
    "            if (shape(mat0)[0]<tolN) or (shape(mat1)[0]<tolN): continue\n",
    "            # Total error after the split\n",
    "            newS=errType(mat0)+errType(mat1)\n",
    "            if newS<bestS:\n",
    "                bestIndex=featIndex\n",
    "                bestValue=splitVal\n",
    "                bestS=newS\n",
    "    # If the best split does not reduce the error enough, create a leaf instead\n",
    "    if (S-bestS)<tolS:\n",
    "        return None,leafType(dataSet)\n",
    "    mat0,mat1=binSplitDataSet(dataSet,bestIndex,bestValue)\n",
    "    # Exit if the split produces a subset that is too small\n",
    "    if (shape(mat0)[0]<tolN) or (shape(mat1)[0]<tolN):\n",
    "        return None,leafType(dataSet)\n",
    "    return bestIndex,bestValue"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Split the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def binSplitDataSet(dataSet,feature,value):\n",
    "    # nonzero(dataSet[:,feature]>value)[0] gives the row indices whose\n",
    "    # value in column 'feature' exceeds 'value'\n",
    "    mat0=dataSet[nonzero(dataSet[:,feature]>value)[0],:]\n",
    "    mat1=dataSet[nonzero(dataSet[:,feature]<=value)[0],:]\n",
    "    return mat0,mat1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Build the tree recursively.\n",
    "##### Each node has four fields:\n",
    "##### the splitting feature, the split value, the left subtree, and the right subtree.\n",
    "##### In a leaf node the split value holds the final prediction and the other fields are set to None."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def createTree(dataSet,leafType=regLeaf,errType=regErr,ops=(1,4)):\n",
    "    feat,val=chooseBestSplit(dataSet,leafType,errType,ops)\n",
    "    retTree={}\n",
    "    if feat is None:\n",
    "        retTree['spInd'] = None\n",
    "        retTree['spVal'] = val\n",
    "        retTree['left'] = None\n",
    "        retTree['right'] = None\n",
    "        return retTree\n",
    "    retTree['spInd']=feat\n",
    "    retTree['spVal']=val\n",
    "    lSet,rSet=binSplitDataSet(dataSet,feat,val)\n",
    "    retTree['left']=createTree(lSet,leafType,errType,ops)\n",
    "    retTree['right']=createTree(rSet,leafType,errType,ops)\n",
    "    return retTree"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Prediction function, used for testing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def predict(node, sample):\n",
    "    # A leaf node stores its prediction in 'spVal'\n",
    "    if node['spInd'] is None: return node['spVal']\n",
    "    elif sample[0,node['spInd']] > node['spVal']: return predict(node['left'], sample)\n",
    "    else: return predict(node['right'], sample)"
   ]
  },
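  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sanity check on a hand-built two-node tree (hypothetical values, not learned from data): samples whose feature 0 exceeds the split value fall into the left leaf, the rest into the right leaf."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical tree: split on feature 0 at 0.5\n",
    "toy = {'spInd': 0, 'spVal': 0.5,\n",
    "       'left':  {'spInd': None, 'spVal': 10.0, 'left': None, 'right': None},\n",
    "       'right': {'spInd': None, 'spVal': 2.0,  'left': None, 'right': None}}\n",
    "print(predict(toy, mat([[0.9]])))  # feature > 0.5 -> left leaf -> 10.0\n",
    "print(predict(toy, mat([[0.1]])))  # otherwise -> right leaf -> 2.0"
   ]
  },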
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "###### Run the pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The tree code reads the target from the last column, so append the\n",
    "# target (median house value) to the feature matrix\n",
    "myMat = mat(column_stack((boston.data, boston.target)))\n",
    "\n",
    "train_data = myMat[:300, :]\n",
    "test_data = myMat[300:, :]\n",
    "root_node = createTree(train_data, leafType=regLeaf, errType=regErr, ops=(1,4))\n",
    "\n",
    "train_err = err(train_data, root_node)\n",
    "test_err = err(test_data, root_node)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Results: the mean squared error on the 300 training and 206 test samples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training set MSE: 3.52421149761905\n",
      "Test set MSE: 47.690153257306314\n"
     ]
    }
   ],
   "source": [
    "print('Training set MSE: {}'.format(train_err))\n",
    "print('Test set MSE: {}'.format(test_err))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Reflections:\n",
    "###### 1. Summary:\n",
    "###### CART (classification and regression tree): regression differs from classification mainly in that its output space is continuous (e.g. house-price prediction), whereas a classification tree's output space is discrete.\n",
    "###### The algorithm differs in a few places: for a regression tree, variance (with a threshold) measures the purity of a sample subset, because the usual Gini index no longer applies; a classification tree can use the Gini index, entropy, etc.\n",
    "###### In the splitting strategy, a feature is not removed from the candidate set after a split, so the same feature may serve as a split point again.\n",
    "###### 2. Unfamiliarity with Python:\n",
    "###### ① Python represents the tree structure with a dict, which is elegant.\n",
    "###### ② Conversions between data types, e.g. the differences between numpy arrays, matrices, and lists.\n",
    "###### ③ Operations on arrays, matrices, and lists.\n",
    "###### Shortcoming: no post-pruning was applied due to time constraints, so the test error is much larger than the training error."
   ]
  }
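  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As noted above, post-pruning was skipped. Below is a minimal sketch of reduced-error post-pruning over the dict-based tree (an illustration in the style of the classic Machine Learning in Action routine, not part of the notebook's measured pipeline): collapse a pair of sibling leaves whenever merging them lowers the squared error on held-out data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of post-pruning; assumes the node dicts built by createTree\n",
    "def isLeaf(node):\n",
    "    return node['spInd'] is None\n",
    "\n",
    "def getMean(node):\n",
    "    # Collapse a subtree to the average of its leaf predictions\n",
    "    if isLeaf(node): return node['spVal']\n",
    "    return (getMean(node['left']) + getMean(node['right'])) / 2.0\n",
    "\n",
    "def prune(node, testData):\n",
    "    if isLeaf(node):\n",
    "        return node\n",
    "    if shape(testData)[0] == 0:\n",
    "        # No held-out samples reach this subtree: collapse it\n",
    "        return {'spInd': None, 'spVal': getMean(node), 'left': None, 'right': None}\n",
    "    lSet, rSet = binSplitDataSet(testData, node['spInd'], node['spVal'])\n",
    "    node['left'] = prune(node['left'], lSet)\n",
    "    node['right'] = prune(node['right'], rSet)\n",
    "    if isLeaf(node['left']) and isLeaf(node['right']):\n",
    "        errNoMerge = sum(power(lSet[:,-1] - node['left']['spVal'], 2)) + sum(power(rSet[:,-1] - node['right']['spVal'], 2))\n",
    "        mergedVal = (node['left']['spVal'] + node['right']['spVal']) / 2.0\n",
    "        errMerge = sum(power(testData[:,-1] - mergedVal, 2))\n",
    "        if errMerge < errNoMerge:\n",
    "            return {'spInd': None, 'spVal': mergedVal, 'left': None, 'right': None}\n",
    "    return node\n",
    "\n",
    "# Example usage: pruned = prune(root_node, test_data)"
   ]
  }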
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
