{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Cross-Validation\n",
     "\n",
     "### 1. Introduction\n",
     "\n",
     "Cross-validation, as the name suggests, reuses the data: the sample data is split and recombined into different training and test sets; the training set is used to fit the model and the test set to evaluate how well it predicts.\n",
     "\n",
     "The data is usually divided into three parts: **training set, validation set, and test set**. The training set fits the model; the validation set evaluates predictive performance and guides the choice of model and its parameters; the chosen model is then applied to the test set to make the final decision on which model and parameters to use. (Suitable when there are more than ~10k samples.)\n",
    "\n",
     "### 2. Common Methods\n",
     "\n",
     "#### 2.1 Simple cross-validation (hold-out)\n",
     "\n",
     "Randomly split the sample data into two parts (e.g. a 70% training set and a 30% test set), then train the model on the training set and validate the model and its parameters on the test set.\n",
    "\n",
     "Repeat the experiment over several random splits and take the mean of the evaluation results as the final estimate.\n",
    "\n",
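     "A minimal sketch of repeated hold-out (assuming a scikit-learn estimator `model` and arrays `X`, `y`; the names are illustrative, not from the original):\n",
     "\n",
     "```python\n",
     "from sklearn.model_selection import train_test_split\n",
     "\n",
     "scores = []\n",
     "for seed in range(10):  # 10 random 70/30 splits\n",
     "    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)\n",
     "    model.fit(X_tr, y_tr)\n",
     "    scores.append(model.score(X_te, y_te))\n",
     "print(sum(scores) / len(scores))  # average over the repeats\n",
     "```\n",
     "\n",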
     "#### 2.2 k-fold cross-validation (k-Fold Cross Validation)\n",
     "\n",
     "k-fold cross-validation divides the dataset D into k parts: k-1 parts form the training set and the remaining part is the test set. This yields k training/test pairs, so the model can be trained and tested k times, and the final result is the mean of the k test scores. (Suitable for roughly 10k samples.)\n",
    "\n",
     "#### 2.3 Bootstrapping\n",
     "\n",
     "Sampling with replacement: each draw takes one sample from dataset D as an element of the training set and then puts it back. Repeating this m times gives a training set $D'$ of size m, in which some samples appear multiple times and some never appear; the never-drawn samples $D-D'$ form the test set. (Suitable for very small datasets, e.g. ~20 samples.)\n",
    "\n",
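     "A minimal sketch of bootstrap index sampling (assuming NumPy arrays `X`, `y`; the names are illustrative):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "m = len(X)\n",
     "train_idx = rng.integers(0, m, size=m)  # m draws with replacement\n",
     "test_idx = np.setdiff1d(np.arange(m), train_idx)  # never-drawn samples become the test set\n",
     "X_train, y_train = X[train_idx], y[train_idx]\n",
     "X_test, y_test = X[test_idx], y[test_idx]\n",
     "```\n",
     "\n",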
     "#### 2.4 Additional concepts\n",
     "\n",
     "There is also **leave-one-out cross-validation** (Leave-One-Out Cross Validation), a special case of k-fold cross-validation in which k equals the number of samples. (Suitable for very small datasets, e.g. ~50 samples.)\n",
    "\n",
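     "In scikit-learn, leave-one-out is available as `LeaveOneOut`, equivalent to `KFold(n_splits=len(X))`: each sample serves as the test set exactly once.\n",
     "\n",
     "```python\n",
     "from sklearn.model_selection import LeaveOneOut\n",
     "\n",
     "for train_index, test_index in LeaveOneOut().split(X):\n",
     "    print(train_index, test_index)  # exactly one test index per iteration\n",
     "```\n",
     "\n",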
     "A model's **hyperparameters** are set before training, e.g. the number of layers and neurons in a neural network.\n",
     "\n",
     "A trained model's **parameters** are learned during training, e.g. weights and biases.\n",
    "\n",
    "\n",
    "\n",
     "### 3. After cross-validation, how do we evaluate and decide on the final model?\n",
     "\n",
     "- Use the parameters and hyperparameters from the best single fold of cross-validation, or retrain on all of the data to obtain one final model? Reference: [Do you really understand cross-validation and overfitting?](https://www.cnblogs.com/solong1989/p/9415606.html)\n",
     "- Another possibility: reserve part of the data as a test set beforehand; once the model's hyperparameters are fixed, train on both the training and test data to obtain the final model.\n",
    "\n",
     "Two answers for reference:\n",
     "\n",
     "1. With 10-fold cross-validation we train 10 times and obtain 10 models, each with different parameters. Which one do we use as the final model? Answer: none of them! We retrain a final model on the full data.\n",
     "\n",
     "   Why did we train those models in the first place? Largely for model selection, i.e. to find the best hyperparameters. Once the hyperparameters are fixed, train on the full data (no train/test split, use all of it); that retrained model is the final model.\n",
     "\n",
     "   My understanding: \"full data\" here means the training set, i.e. the data used for cross-validation. In practice the training set is usually split automatically into a training part and a validation part, and \"full data\" refers to all of that data.\n",
     "\n",
     "2. If we take the best single result from cross-validation, cross-validation still has to reserve part of the data for testing, so we never use all of the information. In other words: after the hyperparameters and input variables are fixed, shouldn't we train one more model on the full data as the final model?\n",
     "\n",
     "   Liu Jianping: generally no. Although this does not make full use of all the information, it offers some protection against overfitting.\n",
    "\n",
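     "A minimal sketch of answer 1 (a hedged illustration using scikit-learn's `GridSearchCV`; the estimator and parameter grid are illustrative choices, not from the original):\n",
     "\n",
     "```python\n",
     "from sklearn.linear_model import LogisticRegression\n",
     "from sklearn.model_selection import GridSearchCV\n",
     "\n",
     "search = GridSearchCV(LogisticRegression(max_iter=1000), {'C': [0.1, 1, 10]}, cv=5)\n",
     "search.fit(X, y)  # refit=True by default: retrains on all of X, y with the best C\n",
     "final_model = search.best_estimator_\n",
     "```\n",
     "\n",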
     "### 4. Code Practice\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
     "from sklearn.model_selection import KFold  # splits in index order; shuffle=True randomizes the grouping\n",
     "from sklearn.model_selection import StratifiedKFold  # splits so each fold preserves the class proportions of y; shuffle only reorders within each class\n",
     "from sklearn.model_selection import GroupKFold  # splits by group so no group appears in both train and test; groups are user-defined, not the class labels\n",
     "from sklearn.model_selection import train_test_split  # train/test split"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Cross-validation: getting each fold's result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X= np.array([[1, 2], [3, 4], [1, 2], [3, 4]])\n",
    "y= np.array([0, 0, 1, 1])\n",
    "kf  =  KFold(n_splits= 2, shuffle=True, random_state=10)\n",
     "kf.get_n_splits(X)  # returns the number of splitting iterations configured above (n_splits=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "train_index: [1 3] TEST: [0 2]\n",
      "[[3 4]\n",
      " [3 4]] [0 1]\n",
      "[[1 2]\n",
      " [1 2]] [0 1]\n",
      "train_index: [0 2] TEST: [1 3]\n",
      "[[1 2]\n",
      " [1 2]] [0 1]\n",
      "[[3 4]\n",
      " [3 4]] [0 1]\n"
     ]
    }
   ],
   "source": [
    "for train_index,test_index in kf.split(X):\n",
    "    print('train_index:',train_index,\"TEST:\", test_index)\n",
    "    X_train,X_test = X[train_index], X[test_index]\n",
    "    y_train,y_test = y[train_index], y[test_index]\n",
    "    print(X_train,y_train)\n",
    "    print(X_test,y_test)"
   ]
  },
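  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hedged aside: `StratifiedKFold` (imported above but not used so far) keeps the class proportions of `y` in every fold. A minimal sketch on the same toy arrays:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=10)\n",
    "for train_index, test_index in skf.split(X, y):  # split() needs y to stratify\n",
    "    print('train_index:', train_index, 'TEST:', test_index)"
   ]
  },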
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Splitting the data by a given ratio"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[1 2]\n",
      " [3 4]] [1 0]\n",
      "[[3 4]\n",
      " [1 2]] [1 0]\n"
     ]
    }
   ],
   "source": [
    "x_train,x_test,y_train,y_test = train_test_split(X,y,test_size = 0.5,shuffle=True)\n",
    "print(x_train,y_train)\n",
    "print(x_test,y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Training with cross-validation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import accuracy_score"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [],
   "source": [
    "X,y = load_iris(return_X_y = True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = LogisticRegression(solver = 'liblinear', multi_class='ovr')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.9, 0.9333333333333333, 0.9, 1.0, 1.0] 0.9466666666666667\n"
     ]
    }
   ],
   "source": [
    "kf  =  KFold(n_splits= 5, shuffle=True, random_state=10)\n",
    "acc = []\n",
    "for train_index,test_index in kf.split(X):\n",
    "    x_train = X[train_index]\n",
    "    y_train = y[train_index]\n",
    "    \n",
    "    x_test = X[test_index]\n",
    "    y_test = y[test_index]\n",
    "    \n",
    "    model.fit(x_train,y_train)\n",
    "    pre = model.predict(x_test)\n",
    "    acc.append(accuracy_score(y_test,pre))\n",
    "print(acc,np.mean(acc))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Using the ready-made cross_validate  \n",
     "Scoring metrics: https://scikit-learn.org/stable/modules/model_evaluation.html#model-evaluation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.9        0.93333333 0.9        1.         1.        ] 0.9466666666666667\n"
     ]
    }
   ],
   "source": [
    "from sklearn.model_selection import cross_validate\n",
    "\n",
    "scores = cross_validate(model,X,y,cv =kf,scoring='accuracy')\n",
    "print(scores['test_score'],np.mean(scores['test_score']))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
