{
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.12-final"
  },
  "orig_nbformat": 2,
  "kernelspec": {
   "name": "python361264bittfconda6792a3e09dd440c78bca3d69354576cc",
   "display_name": "Python 3.6.12 64-bit ('tf': conda)",
   "language": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2,
 "cells": [
  {
   "source": [
    "## Mean Reversion Model\n",
    "\n",
    "- Mean definition: 10-day moving average\n",
    "\n",
    "- Buy/sell rule: buy or sell when the absolute difference between the moving average and the stock price exceeds one standard deviation\n",
    "\n",
    "- Data: closing prices\n",
    "\n",
    "## Machine Learning Model\n",
    "\n",
    "- Input variables\n",
    "\n",
    "    - Stock price data\n",
    "\n",
    "    - Trading volume data\n",
    "\n",
    "    - Index data\n",
    "\n",
    "    - External data\n",
    "\n",
    "    - Company data\n",
    "\n",
    "- Classification models\n",
    "\n",
    "    - Logistic regression\n",
    "\n",
    "    - Decision trees and random forests\n",
    "\n",
    "    - Support vector machines"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
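  {
   "source": [
    "The buy/sell rule above can be sketched as follows. This is a minimal illustration assuming a pandas Series of closing prices; the text does not specify the trade direction, so the usual mean-reversion convention is assumed (price far above the mean: sell, far below: buy):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def mean_reversion_signal(close, window=10):\n",
    "    \"\"\"+1 = buy, -1 = sell, 0 = hold.\"\"\"\n",
    "    ma = close.rolling(window).mean()   # 10-day moving average\n",
    "    std = close.rolling(window).std()   # rolling standard deviation\n",
    "    diff = close - ma\n",
    "    signal = pd.Series(0, index=close.index)\n",
    "    signal[diff > std] = -1   # price far above the mean: expect reversion down, sell\n",
    "    signal[diff < -std] = 1   # price far below the mean: expect reversion up, buy\n",
    "    return signal\n",
    "```"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },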
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Imports\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import pandas_datareader as web\n",
    "from pprint import pprint\n",
    "import datetime\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "from sklearn.model_selection import TimeSeriesSplit\n",
    "\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.svm import SVC\n",
    "\n",
    "from sklearn.metrics import confusion_matrix, classification_report, roc_curve, auc\n",
    "\n",
    "from scipy.stats import randint as sp_randint\n",
    "from sklearn.model_selection import RandomizedSearchCV\n",
    "\n",
    "plt.style.use(['science', 'ieee', 'grid', 'muted']) # styles provided by the SciencePlots package\n"
   ]
  },
  {
   "source": [
    "### Dataset"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def download_stock_data(file_name, company_code, year_start, month_start, date_start, year_end, month_end, date_end):\n",
    "    \"\"\"\n",
    "    Download stock price data from Yahoo Finance\n",
    "\n",
    "    ---params---\n",
    "\n",
    "    file_name: file in which the downloaded data is saved\n",
    "\n",
    "    company_code: ticker code of the stock\n",
    "\n",
    "    year_start/month_start/date_start, year_end/month_end/date_end: date range of the data to collect\n",
    "\n",
    "    \"\"\"\n",
    "    start = datetime.datetime(year_start, month_start, date_start)\n",
    "    end = datetime.datetime(year_end, month_end, date_end)\n",
    "    df = web.DataReader(\"%s.KS\" % (company_code), \"yahoo\", start, end) # the .KS suffix denotes stocks listed on the Korea Exchange (KOSPI)\n",
    "    df.to_pickle(file_name) # persist the data as a pickle file\n",
    "\n",
    "\n",
    "def load_stock_data(file_name):\n",
    "    \"\"\"Load the raw data\"\"\"\n",
    "    return pd.read_pickle(file_name)\n",
    "\n",
    "def make_dataset(df, time_lags=5):\n",
    "    \"\"\"Build the lagged dataset\"\"\"\n",
    "    df_lag = pd.DataFrame(index=df.index)\n",
    "    df_lag[\"Close\"] = df[\"Close\"] # closing price\n",
    "    df_lag[\"Volume\"] = df[\"Volume\"] # trading volume\n",
    "\n",
    "    df_lag[\"Close_Lag%s\" % str(time_lags)] = df[\"Close\"].shift(time_lags) # closing price lagged by time_lags days\n",
    "    df_lag[\"Close_Lag%s_Change\" % str(time_lags)] = df_lag[\"Close_Lag%s\" % str(time_lags)].pct_change()*100.0 # percentage change of the lagged series\n",
    "\n",
    "    df_lag[\"Volume_Lag%s\" % str(time_lags)] = df[\"Volume\"].shift(time_lags) # volume lagged by time_lags days\n",
    "    df_lag[\"Volume_Lag%s_Change\" % str(time_lags)] = df_lag[\"Volume_Lag%s\" % str(time_lags)].pct_change()*100.0\n",
    "\n",
    "    df_lag[\"Close_Direction\"] = np.sign(df_lag[\"Close_Lag%s_Change\" % str(time_lags)]) # price direction (+1/-1)\n",
    "    df_lag[\"Volume_Direction\"] = np.sign(df_lag[\"Volume_Lag%s_Change\" % str(time_lags)]) # volume direction\n",
    "\n",
    "    return df_lag.dropna(how='any')"
   ]
  },
  {
   "source": [
    "### Splitting the Dataset\n",
    "\n",
    "Machine learning also involves time-series data, which is characterized by strong autocorrelation: observations in adjacent periods are highly related. Such data therefore cannot be split by the simple random sampling used in other machine-learning tasks, because the split must preserve the temporal continuity of the series. In $sklearn.model\\_selection$ we use $TimeSeriesSplit()$ to split time-series data; its main parameters are:\n",
    "\n",
    "- $n\\_splits$: $int$, the number of (training + validation) splits produced;\n",
    "\n",
    "- $max\\_train\\_size$: the maximum length of the training window;"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
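  {
   "source": [
    "To see how $TimeSeriesSplit()$ preserves temporal order, here is a small self-contained sketch on synthetic data (not the stock dataset used below): every training index precedes every test index, and the training window grows with each split.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.model_selection import TimeSeriesSplit\n",
    "\n",
    "X = np.arange(10).reshape(-1, 1)  # 10 ordered samples\n",
    "tscv = TimeSeriesSplit(n_splits=3)\n",
    "for train_idx, test_idx in tscv.split(X):\n",
    "    assert train_idx.max() < test_idx.min()  # no look-ahead: training always precedes testing\n",
    "    print(train_idx, test_idx)\n",
    "```"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },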
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def split_dataset(df, input_column, output_column, split_ratio, n_splits):\n",
    "    \"\"\"Split a time-series dataset (input_column and output_column are currently unused)\"\"\"\n",
    "    # build the train/test split generator\n",
    "    cv_generator = TimeSeriesSplit(max_train_size=int(df.shape[0]*split_ratio), n_splits=n_splits) # yields (train_index, test_index) pairs\n",
    "    return cv_generator\n",
    "    "
   ]
  },
  {
   "source": [
    "### Building the Price-Direction Predictor"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def make_classifier(X_train, y_train, name):\n",
    "    \"\"\"Create and fit a classifier\"\"\"\n",
    "    if 'LR' == name: # logistic regression\n",
    "        classifier = LogisticRegression()\n",
    "    elif 'RF' == name: # random forest\n",
    "        classifier = RandomForestClassifier()\n",
    "    else: # support vector machine\n",
    "        classifier = SVC()\n",
    "    classifier.fit(X_train, y_train)\n",
    "    return classifier\n",
    "    "
   ]
  },
  {
   "source": [
    "### Evaluating the Predictor"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def test_predictor(classifier, X_test, y_test):\n",
    "    \"\"\"Evaluate the model\"\"\"\n",
    "    y_pred = classifier.predict(X_test)\n",
    "\n",
    "    score = classifier.score(X_test, y_test)\n",
    "    cm = confusion_matrix(y_test, y_pred) # sklearn expects (y_true, y_pred) in this order\n",
    "\n",
    "    return score, cm\n",
    "    "
   ]
  },
  {
   "source": [
    "### Running the Algorithm"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "- Time Lags = 1\n",
      "Classifier: LR\tAverage Score: 0.75\n",
      "Classifier: RF\tAverage Score: 0.95\n",
      "Classifier: SVM\tAverage Score: 0.48\n",
      "\n",
      "- Time Lags = 2\n",
      "Classifier: LR\tAverage Score: 0.87\n",
      "Classifier: RF\tAverage Score: 0.94\n",
      "Classifier: SVM\tAverage Score: 0.49\n",
      "\n",
      "- Time Lags = 3\n",
      "Classifier: LR\tAverage Score: 0.87\n",
      "Classifier: RF\tAverage Score: 0.96\n",
      "Classifier: SVM\tAverage Score: 0.49\n",
      "\n",
      "- Time Lags = 4\n",
      "Classifier: LR\tAverage Score: 0.69\n",
      "Classifier: RF\tAverage Score: 0.95\n",
      "Classifier: SVM\tAverage Score: 0.48\n",
      "\n",
      "- Time Lags = 5\n",
      "Classifier: LR\tAverage Score: 0.86\n",
      "Classifier: RF\tAverage Score: 0.94\n",
      "Classifier: SVM\tAverage Score: 0.48\n",
      "\n"
     ]
    }
   ],
   "source": [
    "download_stock_data('data/samsung.data', '005930', 2020, 1, 1, 2020, 12, 31)\n",
    "samsung_raw = load_stock_data('data/samsung.data')\n",
    "\n",
    "for time_lags in range(1, 6):\n",
    "    samsung_dataset = make_dataset(samsung_raw, time_lags=time_lags)\n",
    "    input_column = ['Close', 'Close_Lag%s' % str(time_lags), 'Close_Lag%s_Change' % str(time_lags)]\n",
    "    output_column = ['Close_Direction']\n",
    "    X, y = samsung_dataset[input_column].values, samsung_dataset[output_column].values.ravel() # ravel() gives y the 1-D shape sklearn expects\n",
    "    print('- Time Lags = {}'.format(time_lags))\n",
    "    for clsfr_name in ['LR', 'RF', 'SVM']:\n",
    "        scores = []\n",
    "        tscv = split_dataset(samsung_dataset, input_column, output_column, split_ratio=0.25, n_splits=10)\n",
    "        for train_index, test_index in tscv.split(X):\n",
    "            # split the data\n",
    "            X_train, y_train, X_test, y_test = X[train_index], y[train_index], X[test_index], y[test_index]\n",
    "            # fit the model\n",
    "            classifier = make_classifier(X_train, y_train, clsfr_name)\n",
    "            # evaluate\n",
    "            score, _ = test_predictor(classifier, X_test, y_test)\n",
    "            scores.append(score)\n",
    "        print(\"Classifier: {}\\tAverage Score: {:.2f}\".format(clsfr_name, np.array(scores).mean()))\n",
    "\n",
    "    print('')"
   ]
  },
  {
   "source": [
    "# Chapter 5: Implementing an Algorithmic Trading System\n",
    "\n",
    "## Development Environment\n",
    "\n",
    "- Language: Python 3.6\n",
    "\n",
    "- Operating system: Linux, OS X, Windows, or any other system that supports the required Python libraries\n",
    "\n",
    "- Database: MySQL\n",
    "\n",
    "- Libraries\n",
    "    + pandas\n",
    "\n",
    "    + NumPy\n",
    "\n",
    "    + matplotlib\n",
    "\n",
    "    + Beautiful Soup\n",
    "\n",
    "    + MySQLdb\n",
    "\n",
    "    + scikit-learn\n",
    "\n",
    "    + Statsmodels\n",
    "\n",
    "## System Overview\n",
    "\n",
    "- Data crawler\n",
    "\n",
    "- Database\n",
    "\n",
    "- Mean reversion model\n",
    "\n",
    "- Machine learning model\n",
    "\n",
    "- $\\alpha$ model\n",
    "\n",
    "- Portfolio generator\n",
    "\n",
    "- $Trader$ class\n",
    "\n",
    "- Backtesting"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "# Chapter 6: Performance Evaluation and Optimization\n",
    "\n",
    "## 1. Performance Testing of the Trading System\n",
    "\n",
    "## 2. Machine Learning Performance Testing\n",
    "\n",
    "+ Confusion matrix\n",
    "\n",
    "+ Classification report\n",
    "\n",
    "+ ROC\n",
    "\n",
    "## 3. Hyperparameter Optimization\n",
    "\n",
    "+ Grid search\n",
    "\n",
    "+ Random search"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
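  {
   "source": [
    "A minimal sketch of the first two metrics on toy labels (synthetic data, not the notebook's actual predictions):\n",
    "\n",
    "```python\n",
    "from sklearn.metrics import confusion_matrix, classification_report\n",
    "\n",
    "y_true = [1, 1, -1, 1, -1, -1, 1, -1]\n",
    "y_pred = [1, 1, -1, -1, -1, 1, 1, -1]\n",
    "\n",
    "# rows = true labels, columns = predicted labels (sorted: -1, 1)\n",
    "print(confusion_matrix(y_true, y_pred))   # [[3 1]\n",
    "                                          #  [1 3]]\n",
    "# per-class precision, recall, and F1\n",
    "print(classification_report(y_true, y_pred))\n",
    "```"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },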
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "def draw_roc(y_true, y_pred):\n",
    "    \"\"\"Plot the ROC curve\"\"\"\n",
    "    fpr, tpr, thresholds = roc_curve(y_true, y_pred)\n",
    "    roc_auc = auc(fpr, tpr)\n",
    "\n",
    "    plt.title('Receiver Operating Characteristic')\n",
    "    plt.plot(fpr, tpr, 'b', label='AUC = %0.2f' % roc_auc)\n",
    "    plt.legend(loc='lower right')\n",
    "    plt.plot([0, 1], [0, 1], 'r--')\n",
    "\n",
    "    plt.xlim([-0.1, 1.2])\n",
    "    plt.ylim([-0.1, 1.2])\n",
    "    plt.xlabel('False Positive Rate (1 - Specificity)')\n",
    "    plt.ylabel('True Positive Rate (Sensitivity)')\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [],
   "source": [
    "def optimize_hyperparameter_by_random_search(classifier, param_dist, X_train, y_train, X_test, y_test, iter_count=20):\n",
    "    \"\"\"Optimize hyperparameters with randomized search (X_test and y_test are currently unused)\"\"\"\n",
    "    random_search = RandomizedSearchCV(classifier, param_distributions=param_dist, n_iter=iter_count)\n",
    "    results = random_search.fit(X_train, y_train)\n",
    "\n",
    "    for params, mean_score, std_score in zip(results.cv_results_['params'], results.cv_results_['mean_test_score'], results.cv_results_['std_test_score']):\n",
    "        print(\"{:.3f} (+/-{:.3f}) for {}\".format(mean_score, std_score * 2, params), end='\\n\\n')\n",
    "    \n",
    "    return results.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "1.000 (+/-0.000) for {'bootstrap': True, 'criterion': 'gini', 'max_depth': None, 'max_features': 2, 'min_samples_leaf': 1, 'min_samples_split': 6}\n\n0.900 (+/-0.067) for {'bootstrap': True, 'criterion': 'entropy', 'max_depth': 3, 'max_features': 2, 'min_samples_leaf': 8, 'min_samples_split': 9}\n\n1.000 (+/-0.000) for {'bootstrap': True, 'criterion': 'gini', 'max_depth': None, 'max_features': 2, 'min_samples_leaf': 1, 'min_samples_split': 4}\n\n0.850 (+/-0.356) for {'bootstrap': True, 'criterion': 'entropy', 'max_depth': None, 'max_features': 1, 'min_samples_leaf': 3, 'min_samples_split': 10}\n\n0.850 (+/-0.356) for {'bootstrap': False, 'criterion': 'entropy', 'max_depth': None, 'max_features': 1, 'min_samples_leaf': 6, 'min_samples_split': 6}\n\n0.867 (+/-0.200) for {'bootstrap': False, 'criterion': 'entropy', 'max_depth': None, 'max_features': 1, 'min_samples_leaf': 10, 'min_samples_split': 3}\n\n0.900 (+/-0.067) for {'bootstrap': True, 'criterion': 'gini', 'max_depth': 3, 'max_features': 1, 'min_samples_leaf': 9, 'min_samples_split': 2}\n\n0.900 (+/-0.067) for {'bootstrap': True, 'criterion': 'entropy', 'max_depth': 3, 'max_features': 1, 'min_samples_leaf': 6, 'min_samples_split': 8}\n\n0.900 (+/-0.067) for {'bootstrap': True, 'criterion': 'entropy', 'max_depth': None, 'max_features': 2, 'min_samples_leaf': 9, 'min_samples_split': 4}\n\n0.900 (+/-0.067) for {'bootstrap': True, 'criterion': 'gini', 'max_depth': 3, 'max_features': 1, 'min_samples_leaf': 7, 'min_samples_split': 8}\n\n0.967 (+/-0.082) for {'bootstrap': True, 'criterion': 'entropy', 'max_depth': None, 'max_features': 1, 'min_samples_leaf': 1, 'min_samples_split': 5}\n\n0.933 (+/-0.125) for {'bootstrap': True, 'criterion': 'gini', 'max_depth': 3, 'max_features': 1, 'min_samples_leaf': 2, 'min_samples_split': 2}\n\n0.933 (+/-0.267) for {'bootstrap': False, 'criterion': 'entropy', 'max_depth': None, 'max_features': 2, 'min_samples_leaf': 1, 'min_samples_split': 2}\n\n0.917 (+/-0.105) for 
{'bootstrap': False, 'criterion': 'entropy', 'max_depth': 3, 'max_features': 2, 'min_samples_leaf': 8, 'min_samples_split': 2}\n\n0.900 (+/-0.245) for {'bootstrap': False, 'criterion': 'gini', 'max_depth': None, 'max_features': 1, 'min_samples_leaf': 1, 'min_samples_split': 3}\n\n1.000 (+/-0.000) for {'bootstrap': True, 'criterion': 'gini', 'max_depth': None, 'max_features': 2, 'min_samples_leaf': 2, 'min_samples_split': 9}\n\n0.900 (+/-0.245) for {'bootstrap': True, 'criterion': 'gini', 'max_depth': None, 'max_features': 1, 'min_samples_leaf': 1, 'min_samples_split': 4}\n\n0.833 (+/-0.333) for {'bootstrap': False, 'criterion': 'gini', 'max_depth': 3, 'max_features': 1, 'min_samples_leaf': 6, 'min_samples_split': 2}\n\n0.933 (+/-0.267) for {'bootstrap': False, 'criterion': 'entropy', 'max_depth': None, 'max_features': 2, 'min_samples_leaf': 1, 'min_samples_split': 4}\n\n0.933 (+/-0.125) for {'bootstrap': False, 'criterion': 'gini', 'max_depth': 3, 'max_features': 2, 'min_samples_leaf': 7, 'min_samples_split': 10}\n\n"
     ]
    }
   ],
   "source": [
    "param_dist = {\n",
    "    \"max_depth\": [3, None],\n",
    "    \"criterion\": [\"gini\", \"entropy\"],\n",
    "    \"min_samples_split\": sp_randint(2, 11),\n",
    "    \"min_samples_leaf\": sp_randint(1, 11),\n",
    "    \"max_features\": sp_randint(1, X_train.shape[1]),\n",
    "    \"bootstrap\": [True, False],\n",
    "    }\n",
    "classifier = RandomForestClassifier()\n",
    "best_params = optimize_hyperparameter_by_random_search(classifier, param_dist, X_train, y_train, X_test, y_test, 20)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "{'bootstrap': True,\n",
       " 'criterion': 'gini',\n",
       " 'max_depth': None,\n",
       " 'max_features': 2,\n",
       " 'min_samples_leaf': 1,\n",
       " 'min_samples_split': 6}"
      ]
     },
     "metadata": {},
     "execution_count": 62
    }
   ],
   "source": [
    "best_params"
   ]
  },
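  {
   "source": [
    "Chapter 6 also lists grid search. As a counterpart to the randomized search above, here is a minimal $GridSearchCV$ sketch that exhaustively evaluates every combination in a purely illustrative parameter grid:\n",
    "\n",
    "```python\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "def optimize_hyperparameter_by_grid_search(classifier, param_grid, X_train, y_train):\n",
    "    \"\"\"Exhaustively evaluate every combination in param_grid with cross-validation\"\"\"\n",
    "    grid_search = GridSearchCV(classifier, param_grid=param_grid, cv=3)\n",
    "    results = grid_search.fit(X_train, y_train)\n",
    "    return results.best_params_\n",
    "\n",
    "param_grid = {\"max_depth\": [3, None], \"criterion\": [\"gini\", \"entropy\"]}\n",
    "# best_params = optimize_hyperparameter_by_grid_search(RandomForestClassifier(), param_grid, X_train, y_train)\n",
    "```"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },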
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Score after optimizing the hyperparameters: 1.0\n"
     ]
    }
   ],
   "source": [
    "classifier = RandomForestClassifier()\n",
    "classifier.set_params(**best_params)\n",
    "classifier.fit(X_train, y_train)\n",
    "score, _ = test_predictor(classifier, X_test, y_test)\n",
    "print('Score after optimizing the hyperparameters: {}'.format(score))"
   ]
  }
 ]
}