{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "[Machine Learning: Linear Regression (LinearRegression) Explained with a Case Study - CSDN Blog](https://blog.csdn.net/weixin_50804299/article/details/136328878?ops_request_misc=%7B%22request%5Fid%22%3A%228aa5eeb4d502b92141356b247f20e8bb%22%2C%22scm%22%3A%2220140713.130102334..%22%7D&request_id=8aa5eeb4d502b92141356b247f20e8bb&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~top_positive~default-1-136328878-null-null.142^v102^pc_search_result_base2&utm_term=线性回归&spm=1018.2226.3001.4187)\n",
    "\n",
    "\n",
     "# Linear Regression\n",
     "\n",
     "Linear regression is a statistical method for predicting numeric values. It does so by modeling a linear relationship between one or more independent variables and a dependent variable.\n",
     "\n",
     "The basic idea is to fit the best straight line (that is, a linear equation) describing the relationship between the independent and dependent variables. This line is called the regression line, and it is chosen so that the sum of the squared vertical distances (the residuals) from the data points to the line is minimized. This minimization procedure is known as the method of least squares.\n",
     "\n",
     "> The core idea of least squares is very intuitive: **find the line that minimizes the sum of the squared vertical distances (errors) from the data points to the line.**\n",
     ">\n",
     "> - **\"Squares\"**: the errors are squared.\n",
     "> - **\"Least\"**: the sum of those squares is made as small as possible.\n",
    "\n",
     "## I. What Is Linear Regression\n",
     "\n",
     "Linear regression is simple, widely used, and easy to understand, which makes it an excellent first algorithm for machine learning. From secondary-school algebra, treating y as the dependent variable and x as the independent variable gives the equation of a line: $y = ax + b$\n",
     "\n",
     "Here a is the slope, which measures how strongly the independent variable x affects the dependent variable y, and b is the intercept, the value of y when x = 0. The goal of linear regression is to find the parameters a and b that minimize the discrepancy (usually the squared difference) between the predictions $\\hat{y}$ and the actual values y.\n",
    "\n",
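     "For illustration, the least-squares estimates of $a$ and $b$ have a simple closed form (a minimal NumPy sketch; the synthetic data and variable names are illustrative, not part of the case study below):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Synthetic data from a known line y = 2x + 1, plus noise\n",
     "rng = np.random.default_rng(0)\n",
     "x = np.linspace(0, 10, 50)\n",
     "y = 2 * x + 1 + rng.normal(0, 0.5, size=x.shape)\n",
     "\n",
     "# Closed-form least-squares estimates for y = a*x + b\n",
     "a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)\n",
     "b = y.mean() - a * x.mean()\n",
     "print(a, b)  # close to the true values 2 and 1\n",
     "```\n",
     "\n",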
     "The parameters are found by minimizing a loss function, most commonly the mean squared error (MSE). The optimal a and b can be obtained either analytically (the closed-form least-squares solution) or by numerical optimization such as gradient descent.\n",
    "\n",
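     "To make the gradient-descent route concrete, here is a minimal sketch that minimizes the MSE for $y = ax + b$ (the learning rate and iteration count are illustrative choices, not tuned values):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Synthetic data from the line y = 2x + 1, plus noise\n",
     "rng = np.random.default_rng(0)\n",
     "x = np.linspace(0, 10, 50)\n",
     "y = 2 * x + 1 + rng.normal(0, 0.5, size=x.shape)\n",
     "\n",
     "a, b = 0.0, 0.0  # initial guesses\n",
     "lr = 0.01        # learning rate (illustrative)\n",
     "for _ in range(5000):\n",
     "    y_hat = a * x + b\n",
     "    # Gradients of MSE = mean((y_hat - y)^2) with respect to a and b\n",
     "    grad_a = 2 * np.mean((y_hat - y) * x)\n",
     "    grad_b = 2 * np.mean(y_hat - y)\n",
     "    a -= lr * grad_a\n",
     "    b -= lr * grad_b\n",
     "print(a, b)  # approaches the least-squares solution (about 2 and 1)\n",
     "```\n",
     "\n",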
     "## II. How to Use `LinearRegression`\n",
     "\n",
     "1. Import libraries and data: import `scikit-learn` and load the dataset you want to analyze.\n",
     "2. Create the model object: instantiate `sklearn.linear_model.LinearRegression()`. At this step you can set model parameters as needed, e.g. whether to fit an intercept (`fit_intercept`).\n",
     "3. Train the model: call `fit()` with the training data (features and labels), e.g. `lineModel.fit(X_train, Y_train)`, where `X_train` holds the training features and `Y_train` the corresponding labels.\n",
     "4. Predict: after training, call `predict()` on new data, e.g. `lineModel.predict(X_test)`, where `X_test` is the data you want predictions for.\n",
     "5. Evaluate: call `score()` to measure performance; for regressors this returns the $R^2$ coefficient of determination on the given data.\n",
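     "\n",
     "Put together, the five steps can be sketched as follows (using the diabetes dataset that ships with scikit-learn; the variable names are illustrative):\n",
     "\n",
     "```python\n",
     "from sklearn.datasets import load_diabetes\n",
     "from sklearn.linear_model import LinearRegression\n",
     "from sklearn.model_selection import train_test_split\n",
     "\n",
     "# 1. Import libraries and load the data\n",
     "X, y = load_diabetes(return_X_y=True)\n",
     "\n",
     "# 2. Create the model object\n",
     "lineModel = LinearRegression()\n",
     "\n",
     "# 3. Train on a training split\n",
     "X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
     "lineModel.fit(X_train, Y_train)\n",
     "\n",
     "# 4. Predict on held-out data\n",
     "preds = lineModel.predict(X_test)\n",
     "\n",
     "# 5. Evaluate: for regressors, score() returns R²\n",
     "print(lineModel.score(X_test, Y_test))\n",
     "```\n",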
    "\n"
   ],
   "id": "e03d4fa7886e4a7d"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "# Linear Regression on the Diabetes Dataset",
   "id": "549c2292396ca89f"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "## 1. Import the Required Libraries",
   "id": "19d09333ec327c2b"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Core data-handling and plotting libraries: numpy, pandas, and matplotlib\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
     "# Model and evaluation metrics\n",
    "from sklearn.linear_model import LinearRegression\n",
    "from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score\n",
    "\n",
     "# Diabetes dataset bundled with scikit-learn\n",
    "from sklearn.datasets import load_diabetes\n",
    "\n",
     "# Preprocessing\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
     "# Train/test split\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
     "# Visualization (imported for optional styled plots; not used below)\n",
     "import seaborn as sns"
   ],
   "id": "7ca54e27371f2e2f",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "## 2. Load and Explore the Data\n",
     "\n",
     "### 2.1. Dataset Background\n",
     "\n",
     "- **Source**: hosted at [North Carolina State University](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html); introduced by Bradley Efron et al. in the 2004 paper *Least Angle Regression*.\n",
     "- **Purpose**: predict a **quantitative measure of disease progression one year after baseline** (the target variable) from 10 baseline variables such as age, sex, and BMI.\n",
     "\n",
     "### 2.2. Dataset Structure\n",
     "\n",
     "- **Instances**: 442 diabetes patients.\n",
     "\n",
     "- **Attributes**:\n",
     "\n",
     "  - First 10 columns: numeric predictor variables (features):\n",
     "\n",
     "    1. `age`: age in years\n",
     "    2. `sex`: sex\n",
     "    3. `bmi`: body mass index (BMI)\n",
     "    4. `bp`: average blood pressure\n",
     "    5. `s1`: total serum cholesterol (tc)\n",
     "    6. `s2`: low-density lipoproteins (ldl)\n",
     "    7. `s3`: high-density lipoproteins (hdl)\n",
     "    8. `s4`: total cholesterol / HDL ratio (tch)\n",
     "    9. `s5`: possibly the log of serum triglycerides level (ltg)\n",
     "    10. `s6`: blood sugar level (glu)\n",
     "\n",
     "  - **Column 11**: the target variable, a quantitative measure of **disease progression one year after baseline**."
   ],
   "id": "bea2f231932b2cd"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Load the diabetes dataset\n",
     "diabetes = load_diabetes()\n",
     "\n",
     "# Wrap the arrays in DataFrames for easier inspection\n",
     "# data holds the features; feature_names holds the column names\n",
     "df_features = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)\n",
     "\n",
     "# target is the quantitative measure of disease progression after one year\n",
     "df_target = pd.DataFrame(diabetes.target, columns=['target'])"
   ],
   "id": "9ce760ac4542aa2a",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Print the dataset description for background information\n",
     "print(diabetes.DESCR)\n",
     "\n",
     "print(\"\\nFeature data shape:\", df_features.shape)\n",
     "print(\"Target data shape:\", df_target.shape)\n",
     "\n",
     "# Preview the first 5 rows of the features\n",
     "print(\"\\nFeature preview:\")\n",
     "print(df_features.head())\n",
     "\n",
     "# Preview the target variable\n",
     "print(\"\\nTarget preview:\")\n",
     "print(df_target.head())"
   ],
   "id": "7f92c955fe7c8739",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "df_features",
   "id": "dabd6f1e7e846623",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "df_target",
   "id": "5cd3035d9dafe6a1",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "## 3. Data Preprocessing\n",
     "One convenience of this dataset is that it ships pre-processed: each feature has been mean-centered and scaled so that the sum of squares of each column equals 1 (see the DESCR output above). To demonstrate the standard workflow, we still perform a train/test split and standardization."
   ],
   "id": "86771ebc0d204a3a"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Separate the features (X) from the target (y)\n",
     "X = diabetes.data\n",
     "y = diabetes.target\n",
     "\n",
     "# Randomly split into a training set (80%) and a test set (20%)\n",
     "# random_state makes the split reproducible\n",
     "X_train, X_test, y_train, y_test = train_test_split(X,\n",
     "                                                    y,\n",
     "                                                    test_size=0.2,\n",
     "                                                    random_state=42)"
   ],
   "id": "53703cbcc5c5ef98",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "print(\"Training set size:\", X_train.shape)\n",
     "print(\"Test set size:\", X_test.shape)\n",
    "X_train"
   ],
   "id": "afe92d7bb52258ff",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Standardize the features (the data is already scaled, but we repeat it to demonstrate the workflow)\n",
     "# Note: fit the scaler on the training set only, then apply the same scaler to the test set\n",
    "scaler = StandardScaler()\n",
    "X_train_scaled = scaler.fit_transform(X_train)\n",
    "X_test_scaled = scaler.transform(X_test)"
   ],
   "id": "a2c02444ea9b82c0",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": "X_train_scaled",
   "id": "8d9a821ee7e8422e",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "## 4. Create and Train the Linear Regression Model",
   "id": "f882434882a4a91b"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Create the linear regression model object\n",
     "# By default it is fitted with ordinary least squares\n",
    "lineModel = LinearRegression()\n",
    "\n",
    "# 训练模型\n",
    "lineModel.fit(X_train_scaled, y_train)\n"
   ],
   "id": "a510f40f046809f2",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Inspect the learned parameters (weights and intercept)\n",
     "print(f\"Intercept (b): {lineModel.intercept_}\")\n",
     "print(\"Coefficients (w):\")\n",
    "for i, col_name in enumerate(diabetes.feature_names):\n",
    "    print(f\"{col_name}: {lineModel.coef_[i]}\")"
   ],
   "id": "68169d004f4114c6",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "## 5. Predict on the Test Set and Evaluate the Model\n",
     "This is the key step for judging how good the model is."
   ],
   "id": "4a8fada911f0a8b"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Predict on the test set with the trained model\n",
     "lr_pred = lineModel.predict(X_test_scaled)\n",
     "\n",
     "# Compute evaluation metrics\n",
     "lr_mse = mean_squared_error(y_test, lr_pred)  # mean squared error\n",
     "lr_rmse = np.sqrt(lr_mse)  # root mean squared error; same units as the target, easier to interpret\n",
     "lr_mae = mean_absolute_error(y_test, lr_pred)  # mean absolute error\n",
     "lr_r2 = r2_score(y_test, lr_pred)  # R², coefficient of determination; closer to 1 is better\n",
     "\n",
     "# Collect the true values, predictions, and errors in a DataFrame\n",
    "results_df = pd.DataFrame({\n",
    "    '真实值': y_test,\n",
    "    '预测值': lr_pred,\n",
    "    '误差': y_test - lr_pred\n",
    "})\n"
   ],
   "id": "1d32f6fe0764068d",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "print(\"\\n===== Model evaluation =====\")\n",
     "print(f\"MSE: {lr_mse:.2f}\")\n",
     "print(f\"RMSE: {lr_rmse:.2f}\")\n",
     "print(f\"MAE: {lr_mae:.2f}\")\n",
     "print(f\"R²: {lr_r2:.2f}\")\n",
     "\n",
     "print(\"\\n===== Predictions =====\")\n",
     "print(results_df.head())"
   ],
   "id": "ef94b52d5c3c5fe5",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "## 6. Visualize the Results\n",
     "Visualization gives a more intuitive view of model performance."
   ],
   "id": "d9d87af46430b37f"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Set the plotting style\n",
     "plt.style.use('default')  # default style avoids compatibility issues\n",
    "fig, axes = plt.subplots(1, 2, figsize=(14, 6))\n",
    "\n",
     "# 1. Scatter plot: true vs predicted values\n",
    "axes[0].scatter(y_test, lr_pred, alpha=0.6)\n",
    "axes[0].plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', lw=2)\n",
    "axes[0].set_xlabel('True Value')\n",
    "axes[0].set_ylabel('Predicted Value')\n",
    "axes[0].set_title('True vs Predicted Values')\n",
    "axes[0].grid(True)\n",
    "\n",
     "# 2. Histogram of prediction errors\n",
    "axes[1].hist(results_df['误差'], bins=30, edgecolor='black', alpha=0.7)\n",
    "axes[1].axvline(x=0, color='r', linestyle='--')\n",
    "axes[1].set_xlabel('Prediction Error')\n",
    "axes[1].set_ylabel('Frequency')\n",
    "axes[1].set_title('Prediction Error Distribution')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
     "# 3. Bar chart of feature importance (absolute coefficients)\n",
    "feature_importance = pd.DataFrame({\n",
    "    'feature': diabetes.feature_names,\n",
    "    'coefficient': lineModel.coef_,\n",
    "    'abs_coefficient': np.abs(lineModel.coef_)\n",
    "}).sort_values('abs_coefficient', ascending=False)\n",
    "\n",
    "plt.figure(figsize=(10, 6))\n",
    "plt.barh(feature_importance['feature'], feature_importance['abs_coefficient'])\n",
    "plt.xlabel('Absolute Coefficient (Feature Importance)')\n",
    "plt.title('Linear Regression Feature Importance')\n",
    "plt.gca().invert_yaxis()\n",
    "plt.show()"
   ],
   "id": "6e2f15a4d0fb98ac",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "# Random Forest Regression",
   "id": "c06715d4096e7960"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "## 4. Create and Train the Random Forest Model\n",
     "Note: random forests do not require feature scaling, but we keep using the scaled data so the comparison with linear regression uses identical inputs."
   ],
   "id": "e811a537e275cb48"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from sklearn.ensemble import RandomForestRegressor\n",
    "\n",
     "rf_model = RandomForestRegressor(n_estimators=100,  # number of trees\n",
     "                                 random_state=42,  # random seed for reproducibility\n",
     "                                 max_depth=100,  # maximum tree depth (effectively unlimited for this small dataset)\n",
     "                                 min_samples_split=2,  # minimum samples required to split a node\n",
     "                                 n_jobs=-1  # use all CPU cores\n",
     "                                 )\n",
     "# Train the model\n",
    "rf_model.fit(X_train_scaled, y_train)"
   ],
   "id": "d7670ee4ae270877",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Inspect the model's hyperparameters\n",
     "print(\"\\n===== Model parameters =====\")\n",
     "print(rf_model.get_params())\n"
   ],
   "id": "546f0f8286ab1a54",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "## 5. Random Forest Prediction and Evaluation",
   "id": "5b7fd74ad93cb4bf"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Configure fonts so the Chinese labels in the plots below render correctly\n",
     "plt.rcParams['font.sans-serif'] = ['SimHei', 'Microsoft YaHei', 'DejaVu Sans']  # CJK-capable fonts\n",
     "plt.rcParams['axes.unicode_minus'] = False  # render the minus sign correctly"
   ],
   "id": "38518b7b77fa80a3",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "rf_pred = rf_model.predict(X_test_scaled)\n",
    "rf_mse = mean_squared_error(y_test, rf_pred)\n",
    "rf_rmse = np.sqrt(rf_mse)\n",
    "rf_mae = mean_absolute_error(y_test, rf_pred)\n",
    "rf_r2 = r2_score(y_test, rf_pred)\n"
   ],
   "id": "fdddf82e16514061",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "print(\"\\n===== Model evaluation =====\")\n",
     "print(f\"Random forest - MSE: {rf_mse:.2f}\")\n",
     "print(f\"Random forest - RMSE: {rf_rmse:.2f}\")\n",
     "print(f\"Random forest - MAE: {rf_mae:.2f}\")\n",
     "print(f\"Random forest - R²: {rf_r2:.2f}\")\n"
   ],
   "id": "fe08ac69644a812b",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "## 6. Visualize the Results",
   "id": "94d605a420fbc091"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Random forest: true vs predicted values\n",
     "plt.figure(figsize=(6, 5))  # standalone figure (plt.subplot(1, 2, 2) would leave an empty first panel)\n",
     "plt.scatter(y_test, rf_pred, alpha=0.6, label='预测值')\n",
     "plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', lw=2, label='完美预测')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('随机森林: 真实值 vs 预测值\\nR² = {:.3f}'.format(rf_r2))\n",
    "plt.legend()\n",
    "plt.grid(True)\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ],
   "id": "7bf05470ee24482a",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Random-forest-specific analysis: feature importance\n",
     "print(\"\\n\" + \"=\" * 50)\n",
     "print(\"Random forest feature importance\")\n",
     "print(\"=\" * 50)\n",
    "\n",
     "# Extract the feature importances\n",
    "feature_importance = pd.DataFrame({\n",
    "    'feature': diabetes.feature_names,\n",
    "    'importance': rf_model.feature_importances_\n",
    "}).sort_values('importance', ascending=False)\n",
    "\n",
     "print(\"Feature importance ranking:\")\n",
     "for i, row in feature_importance.iterrows():\n",
     "    print(f\"{row['feature']}: {row['importance']:.4f}\")\n",
     "\n",
     "# Visualize the feature importances\n",
     "plt.figure(figsize=(10, 6))\n",
     "plt.barh(feature_importance['feature'], feature_importance['importance'])\n",
     "plt.xlabel('特征重要性')\n",
     "plt.title('随机森林特征重要性排名')\n",
     "plt.gca().invert_yaxis()  # show the most important feature at the top\n",
     "plt.tight_layout()\n",
     "plt.show()"
   ],
   "id": "6f9c3ca2b2f4f327",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "# Model Performance Comparison",
   "id": "651cac85f931de37"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "print(\"\\n\" + \"=\" * 50)\n",
     "print(\"Model comparison\")\n",
    "print(\"=\" * 50)\n"
   ],
   "id": "73c6d6ca4e1dc17a",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "comparison = pd.DataFrame({\n",
    "    'Metric': ['RMSE', 'MAE', 'R² Score'],\n",
    "    'Linear Regression': [lr_rmse, lr_mae, lr_r2],\n",
    "    'Random Forest': [rf_rmse, rf_mae, rf_r2]\n",
    "})\n",
    "print(comparison)"
   ],
   "id": "45a7e66095c3c198",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Relative improvement of the random forest over linear regression\n",
    "improvement = pd.DataFrame({\n",
    "    'Metric': ['RMSE', 'MAE', 'R² Score'],\n",
    "    'Improvement': [\n",
    "        f\"{(lr_rmse - rf_rmse) / lr_rmse * 100:.1f}%\",\n",
    "        f\"{(lr_mae - rf_mae) / lr_mae * 100:.1f}%\",\n",
    "        f\"{(rf_r2 - lr_r2) / lr_r2 * 100:.1f}%\"\n",
    "    ]\n",
    "})\n",
     "print(\"\\nImprovement of random forest over linear regression:\")\n",
    "print(improvement)"
   ],
   "id": "1dbf1004b1bd26c1",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
     "# Visual comparison: true vs predicted values\n",
    "plt.figure(figsize=(12, 5))\n",
    "\n",
     "# Linear regression\n",
    "plt.subplot(1, 2, 1)\n",
    "plt.scatter(y_test, lr_pred, alpha=0.6, label='预测值')\n",
    "plt.plot([y.min(), y.max()], [y.min(), y.max()], 'r--', lw=2, label='完美预测')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('线性回归: 真实值 vs 预测值\\nR² = {:.3f}'.format(lr_r2))\n",
    "plt.legend()\n",
    "plt.grid(True)\n",
    "\n",
     "# Random forest\n",
    "plt.subplot(1, 2, 2)\n",
    "plt.scatter(y_test, rf_pred, alpha=0.6, label='预测值')\n",
    "plt.plot([y.min(), y.max()], [y.min(), y.max()], 'r--', lw=2, label='完美预测')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('随机森林: 真实值 vs 预测值\\nR² = {:.3f}'.format(rf_r2))\n",
    "plt.legend()\n",
    "plt.grid(True)\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ],
   "id": "bf1adbf18c1a1db2",
   "outputs": [],
   "execution_count": null
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
