{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 步骤四：（仅carol）运行可信APP"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里我们假设alice和bob已经把第二步加密得到的文件传给了carol（机构之间应该自行通过安全的传输链路进行传输）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 直接使用breast cancer数据集建模\n",
    "\n",
    "为了对比，我们先尝试直接用前文中提到的breast cancer数据集进行树模型建模（基于XGBoost）。后续我们再尝试使用TrustedFlow还原这个实验，我们会发现两者的效果是一样的。\n",
    "\n",
    "为了运行下列代码，您可能需要准备环境。\n",
    "\n",
    "- python 3.8或者更高版本。\n",
    "- 安装pandas、scikit-learn和xgboost。您可以通过下列命令完成安装。\n",
    "\n",
    "    ```bash\n",
    "    pip install pandas scikit-learn xgboost\n",
    "    ```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. 加载数据集\n",
    "\n",
    "我们使用sklearn内置的breast cancer数据集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas\n",
    "from sklearn.datasets import load_breast_cancer\n",
    "\n",
    "breast_cancer_data = load_breast_cancer(as_frame=True)\n",
    "features = [\"mean radius\",\n",
    "          \"mean texture\",\n",
    "          \"mean perimeter\",\n",
    "          \"mean area\",\n",
    "          \"mean smoothness\",\n",
    "          \"mean compactness\",\n",
    "          \"mean concavity\",\n",
    "          \"mean concave points\",\n",
    "          \"mean symmetry\",\n",
    "          \"mean fractal dimension\"]\n",
    "df = pandas.DataFrame(breast_cancer_data.data, columns=features)\n",
    "df['target'] = breast_cancer_data.target"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "查看数据集，可以看到df拥有10个特征，总样本数是569。"
   ]
  },
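  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a sketch, not part of the original experiment), you can verify these dimensions directly; the 10 selected features are exactly the \"mean\" columns of the full 30-feature dataset:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import load_breast_cancer\n",
    "\n",
    "# Reload the dataset and confirm the selection above: the 10 'mean ...'\n",
    "# columns out of 30 available features, and 569 samples in total.\n",
    "data = load_breast_cancer(as_frame=True)\n",
    "mean_features = [c for c in data.data.columns if c.startswith('mean')]\n",
    "assert len(mean_features) == 10\n",
    "assert data.data.shape == (569, 30)\n",
    "```"
   ]
  },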
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>mean radius</th>\n",
       "      <th>mean texture</th>\n",
       "      <th>mean perimeter</th>\n",
       "      <th>mean area</th>\n",
       "      <th>mean smoothness</th>\n",
       "      <th>mean compactness</th>\n",
       "      <th>mean concavity</th>\n",
       "      <th>mean concave points</th>\n",
       "      <th>mean symmetry</th>\n",
       "      <th>mean fractal dimension</th>\n",
       "      <th>target</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>17.99</td>\n",
       "      <td>10.38</td>\n",
       "      <td>122.80</td>\n",
       "      <td>1001.0</td>\n",
       "      <td>0.11840</td>\n",
       "      <td>0.27760</td>\n",
       "      <td>0.30010</td>\n",
       "      <td>0.14710</td>\n",
       "      <td>0.2419</td>\n",
       "      <td>0.07871</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>20.57</td>\n",
       "      <td>17.77</td>\n",
       "      <td>132.90</td>\n",
       "      <td>1326.0</td>\n",
       "      <td>0.08474</td>\n",
       "      <td>0.07864</td>\n",
       "      <td>0.08690</td>\n",
       "      <td>0.07017</td>\n",
       "      <td>0.1812</td>\n",
       "      <td>0.05667</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>19.69</td>\n",
       "      <td>21.25</td>\n",
       "      <td>130.00</td>\n",
       "      <td>1203.0</td>\n",
       "      <td>0.10960</td>\n",
       "      <td>0.15990</td>\n",
       "      <td>0.19740</td>\n",
       "      <td>0.12790</td>\n",
       "      <td>0.2069</td>\n",
       "      <td>0.05999</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>11.42</td>\n",
       "      <td>20.38</td>\n",
       "      <td>77.58</td>\n",
       "      <td>386.1</td>\n",
       "      <td>0.14250</td>\n",
       "      <td>0.28390</td>\n",
       "      <td>0.24140</td>\n",
       "      <td>0.10520</td>\n",
       "      <td>0.2597</td>\n",
       "      <td>0.09744</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>20.29</td>\n",
       "      <td>14.34</td>\n",
       "      <td>135.10</td>\n",
       "      <td>1297.0</td>\n",
       "      <td>0.10030</td>\n",
       "      <td>0.13280</td>\n",
       "      <td>0.19800</td>\n",
       "      <td>0.10430</td>\n",
       "      <td>0.1809</td>\n",
       "      <td>0.05883</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>564</th>\n",
       "      <td>21.56</td>\n",
       "      <td>22.39</td>\n",
       "      <td>142.00</td>\n",
       "      <td>1479.0</td>\n",
       "      <td>0.11100</td>\n",
       "      <td>0.11590</td>\n",
       "      <td>0.24390</td>\n",
       "      <td>0.13890</td>\n",
       "      <td>0.1726</td>\n",
       "      <td>0.05623</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>565</th>\n",
       "      <td>20.13</td>\n",
       "      <td>28.25</td>\n",
       "      <td>131.20</td>\n",
       "      <td>1261.0</td>\n",
       "      <td>0.09780</td>\n",
       "      <td>0.10340</td>\n",
       "      <td>0.14400</td>\n",
       "      <td>0.09791</td>\n",
       "      <td>0.1752</td>\n",
       "      <td>0.05533</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>566</th>\n",
       "      <td>16.60</td>\n",
       "      <td>28.08</td>\n",
       "      <td>108.30</td>\n",
       "      <td>858.1</td>\n",
       "      <td>0.08455</td>\n",
       "      <td>0.10230</td>\n",
       "      <td>0.09251</td>\n",
       "      <td>0.05302</td>\n",
       "      <td>0.1590</td>\n",
       "      <td>0.05648</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>567</th>\n",
       "      <td>20.60</td>\n",
       "      <td>29.33</td>\n",
       "      <td>140.10</td>\n",
       "      <td>1265.0</td>\n",
       "      <td>0.11780</td>\n",
       "      <td>0.27700</td>\n",
       "      <td>0.35140</td>\n",
       "      <td>0.15200</td>\n",
       "      <td>0.2397</td>\n",
       "      <td>0.07016</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>568</th>\n",
       "      <td>7.76</td>\n",
       "      <td>24.54</td>\n",
       "      <td>47.92</td>\n",
       "      <td>181.0</td>\n",
       "      <td>0.05263</td>\n",
       "      <td>0.04362</td>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.1587</td>\n",
       "      <td>0.05884</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>569 rows × 11 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "     mean radius  mean texture  mean perimeter  mean area  mean smoothness  \\\n",
       "0          17.99         10.38          122.80     1001.0          0.11840   \n",
       "1          20.57         17.77          132.90     1326.0          0.08474   \n",
       "2          19.69         21.25          130.00     1203.0          0.10960   \n",
       "3          11.42         20.38           77.58      386.1          0.14250   \n",
       "4          20.29         14.34          135.10     1297.0          0.10030   \n",
       "..           ...           ...             ...        ...              ...   \n",
       "564        21.56         22.39          142.00     1479.0          0.11100   \n",
       "565        20.13         28.25          131.20     1261.0          0.09780   \n",
       "566        16.60         28.08          108.30      858.1          0.08455   \n",
       "567        20.60         29.33          140.10     1265.0          0.11780   \n",
       "568         7.76         24.54           47.92      181.0          0.05263   \n",
       "\n",
       "     mean compactness  mean concavity  mean concave points  mean symmetry  \\\n",
       "0             0.27760         0.30010              0.14710         0.2419   \n",
       "1             0.07864         0.08690              0.07017         0.1812   \n",
       "2             0.15990         0.19740              0.12790         0.2069   \n",
       "3             0.28390         0.24140              0.10520         0.2597   \n",
       "4             0.13280         0.19800              0.10430         0.1809   \n",
       "..                ...             ...                  ...            ...   \n",
       "564           0.11590         0.24390              0.13890         0.1726   \n",
       "565           0.10340         0.14400              0.09791         0.1752   \n",
       "566           0.10230         0.09251              0.05302         0.1590   \n",
       "567           0.27700         0.35140              0.15200         0.2397   \n",
       "568           0.04362         0.00000              0.00000         0.1587   \n",
       "\n",
       "     mean fractal dimension  target  \n",
       "0                   0.07871       0  \n",
       "1                   0.05667       0  \n",
       "2                   0.05999       0  \n",
       "3                   0.09744       0  \n",
       "4                   0.05883       0  \n",
       "..                      ...     ...  \n",
       "564                 0.05623       0  \n",
       "565                 0.05533       0  \n",
       "566                 0.05648       0  \n",
       "567                 0.07016       0  \n",
       "568                 0.05884       1  \n",
       "\n",
       "[569 rows x 11 columns]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. 拆分数据集\n",
    "\n",
    "我们把数据集拆分为训练集（80%）和测试集（20%）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "dataset_train, dataset_test = train_test_split(\n",
    "        df,\n",
    "        train_size=0.8,\n",
    "        random_state=1024,\n",
    "        shuffle=True,\n",
    ")\n",
    "\n",
    "x_train = dataset_train[features]\n",
    "y_train = dataset_train[\"target\"]"
   ]
  },
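  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With train_size=0.8, scikit-learn takes floor(0.8 × 569) = 455 rows for training and the remaining 114 for testing, and the fixed random_state makes the split reproducible across runs. A small sketch illustrating the resulting sizes:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import load_breast_cancer\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Same split parameters as above, applied to the full 30-feature frame.\n",
    "frame = load_breast_cancer(as_frame=True).frame\n",
    "train, test = train_test_split(frame, train_size=0.8, random_state=1024, shuffle=True)\n",
    "assert (len(train), len(test)) == (455, 114)\n",
    "```"
   ]
  },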
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. 训练模型\n",
    "\n",
    "我们使用XGBoost的XGBClassifier进行模型训练。可以看到训练出来的模型包含100课树（n_estimators=100），树的最大深度为6（max_depth=6）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[11:46:30] WARNING: ../src/learner.cc:1115: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.\n",
      "XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,\n",
      "              colsample_bynode=1, colsample_bytree=1, enable_categorical=False,\n",
      "              gamma=0, gpu_id=-1, importance_type=None,\n",
      "              interaction_constraints='', learning_rate=0.3, max_bin=10,\n",
      "              max_delta_step=0, max_depth=6, max_leaves=0, min_child_weight=1,\n",
      "              missing=nan, monotone_constraints='()', n_estimators=100,\n",
      "              n_jobs=64, num_parallel_tree=1, predictor='auto', random_state=42,\n",
      "              reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,\n",
      "              tree_method='auto', validate_parameters=1, verbosity=None)\n"
     ]
    }
   ],
   "source": [
    "import xgboost as xgb\n",
    "\n",
    "param = {\n",
    "    \"n_estimators\": 100,\n",
    "    \"max_depth\": 6,\n",
    "    \"max_leaves\": 0,\n",
    "    \"random_state\": 42,\n",
    "    \"learning_rate\": 0.3,\n",
    "    \"reg_lambda\": 1,\n",
    "    \"gamma\": 0,\n",
    "    \"colsample_bytree\": 1,\n",
    "    \"base_score\": 0.5,\n",
    "    \"min_child_weight\": 1,\n",
    "    \"reg_alpha\": 0,\n",
    "    \"subsample\": 1,\n",
    "    \"max_bin\": 10,\n",
    "    \"tree_method\": \"auto\",\n",
    "    \"booster\": \"gbtree\"\n",
    "}\n",
    "model = xgb.XGBClassifier(**param, objective=\"binary:logistic\")\n",
    "model.fit(x_train, y_train)\n",
    "print(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们可以查看特征的重要性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Feature importance: \n",
      "mean radius: 23.0\n",
      "mean texture: 76.0\n",
      "mean perimeter: 15.0\n",
      "mean area: 70.0\n",
      "mean smoothness: 31.0\n",
      "mean compactness: 48.0\n",
      "mean concavity: 46.0\n",
      "mean concave points: 54.0\n",
      "mean symmetry: 38.0\n",
      "mean fractal dimension: 15.0\n"
     ]
    }
   ],
   "source": [
    "scores = model.get_booster().get_score(importance_type=\"weight\")\n",
    "print(f'Feature importance: ')\n",
    "for feat, score in scores.items():\n",
    "    print(f'{feat}: {score}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们还可以保存并查看模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.save_model('xgb.json')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'best_iteration': '99', 'best_ntree_limit': '100', 'scikit_learn': '{\"use_label_encoder\": true, \"n_estimators\": 100, \"objective\": \"binary:logistic\", \"max_depth\": 6, \"learning_rate\": 0.3, \"verbosity\": null, \"booster\": \"gbtree\", \"tree_method\": \"auto\", \"gamma\": 0, \"min_child_weight\": 1, \"max_delta_step\": null, \"subsample\": 1, \"colsample_bytree\": 1, \"colsample_bylevel\": null, \"colsample_bynode\": null, \"reg_alpha\": 0, \"reg_lambda\": 1, \"scale_pos_weight\": null, \"base_score\": 0.5, \"missing\": NaN, \"num_parallel_tree\": null, \"random_state\": 42, \"n_jobs\": null, \"monotone_constraints\": null, \"interaction_constraints\": null, \"importance_type\": null, \"gpu_id\": null, \"validate_parameters\": null, \"predictor\": null, \"enable_categorical\": false, \"kwargs\": {\"max_leaves\": 0, \"max_bin\": 10}, \"classes_\": [0, 1], \"n_classes_\": 2, \"_le\": {\"classes_\": [0, 1]}, \"_estimator_type\": \"classifier\"}'}\n",
      "{'num_trees': '100', 'size_leaf_vector': '0'}\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "with open('xgb.json') as f:\n",
    "    model_content = json.loads(f.read())\n",
    "print(model_content['learner']['attributes'])\n",
    "print(model_content['learner']['gradient_booster']['model']['gbtree_model_param'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. 模型预测\n",
    "\n",
    "我们使用模型对测试集进行预测。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[9.9944550e-01 2.1459084e-04 9.9984765e-01 9.9749255e-01 9.9647528e-01\n",
      " 9.9969363e-01 9.9941480e-01 6.3684699e-04 9.7209483e-01 9.9473464e-01\n",
      " 1.6540854e-03 2.3896262e-04 3.8317341e-04 9.9985218e-01 5.9765935e-01\n",
      " 4.5679703e-01 9.9990678e-01 9.9769336e-01 2.7916208e-03 3.7675821e-03\n",
      " 1.9103836e-04 9.9960858e-01 9.9962080e-01 9.9627030e-01 9.9977034e-01\n",
      " 9.9947768e-01 9.9964762e-01 1.2528183e-04 1.0330486e-01 5.0118339e-01\n",
      " 9.9775106e-01 9.9890268e-01 1.9544033e-04 9.9796677e-01 7.0530081e-01\n",
      " 9.9935633e-01 9.9767107e-01 1.8424608e-02 1.6954618e-04 2.7818105e-04\n",
      " 9.7751975e-01 9.9992776e-01 1.0048379e-01 9.9963486e-01 9.9878162e-01\n",
      " 3.8418157e-03 9.7013623e-01 9.9922204e-01 5.3013045e-01 9.9950826e-01\n",
      " 9.9253851e-01 9.9937493e-01 9.9905115e-01 9.9105459e-01 9.9971515e-01\n",
      " 9.9880552e-01 1.4384058e-03 6.0805246e-02 9.9963820e-01 9.9979657e-01\n",
      " 3.3138663e-01 9.9796212e-01 9.9971658e-01 9.9938118e-01 9.9913353e-01\n",
      " 1.5776281e-04 9.9951375e-01 9.9943417e-01 7.0775865e-04 9.9984276e-01\n",
      " 9.9962080e-01 9.9971837e-01 9.9354666e-01 7.0599091e-01 9.9972600e-01\n",
      " 9.9761689e-01 9.9931610e-01 9.9896085e-01 9.9977821e-01 7.8626734e-04\n",
      " 7.4145687e-03 4.9908675e-02 9.9551517e-01 9.9991488e-01 9.9699879e-01\n",
      " 9.9985933e-01 9.9922681e-01 9.9904543e-01 9.7297430e-01 9.9951780e-01\n",
      " 2.0591648e-04 9.5256639e-01 9.9960631e-01 9.9651462e-01 1.8664867e-04\n",
      " 1.6123910e-03 1.4846998e-02 7.9491258e-01 2.5027551e-04 7.5842893e-01\n",
      " 9.9955493e-01 9.9915183e-01 9.9892813e-01 2.4820265e-04 2.2480548e-04\n",
      " 9.9935180e-01 4.2976461e-02 9.9985719e-01 1.2770237e-03 9.9991488e-01\n",
      " 9.9252748e-01 9.8064673e-01 9.9844426e-01 9.9706692e-01]\n",
      "[1 0 1 1 1 1 1 0 1 1 0 0 0 1 1 0 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 0 1 1 1 1\n",
      " 0 0 0 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 0 0 1 1 0 1 1 1 1 0 1 1 0 1 1 1 1 1\n",
      " 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 0 1 1 1 0 0 0 1 0 1 1 1 1 0 0 1 0 1 0 1 1\n",
      " 1 1 1]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "x_test = dataset_test[features]\n",
    "y_test = dataset_test[\"target\"].to_numpy()\n",
    "y_score = model.predict_proba(x_test)[:, 1]\n",
    "y_pred = np.array([(1 if x >= 0.5 else 0) for x in y_score])\n",
    "\n",
    "print(y_score)\n",
    "print(y_pred)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. 评估预测结果\n",
    "\n",
    "我们对预测结果进行评估，下列代码中计算了预测结果的AUC（Area Under Curve）、KS（Kolmogorov-Smirnov）和F1分数（F1 Score）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "auc: 0.9796419796419795\n",
      "f1: 0.9426751592356688\n",
      "ks: 0.855036855036855\n"
     ]
    }
   ],
   "source": [
    "from sklearn import metrics\n",
    "\n",
    "auc = metrics.roc_auc_score(y_test, y_score)\n",
    "f1 = metrics.f1_score(y_test, y_pred)\n",
    "fprs, tprs, _ = metrics.roc_curve(y_test, y_score)\n",
    "ks = max(tprs - fprs)\n",
    "print(f'auc: {auc}')\n",
    "print(f'f1: {f1}')\n",
    "print(f'ks: {ks}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 使用TrustedFlow复现breast cancer建模\n",
    "\n",
    "TrustedFlow提供了多种可信APP，详细列表参见[可信APP](../architecture/apps/index.rst)。\n",
    "\n",
    "上一步我们尝试了直接使用明文数据breast cancer进行树模型建模，接下来我们将使用TrustedFlow复现上述实验。\n",
    "\n",
    "为了复现上述实验，相对应的我们需要执行5个可信APP，分别是数据求交、数据集拆分、XGBoost训练、预测、二分类评估。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 选项一：仿真模式"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1. 启动可信APP容器  \n",
    "\n",
    "```bash\n",
    "docker run -it --name teeapps-sim --network=host secretflow/teeapps-sim-ubuntu20.04:latest bash\n",
    "```\n",
    "\n",
    "#### 2. 把alice和bob的加密数据文件放入容器内\n",
    "在**宿主机**上执行下列命令。\n",
    "```bash\n",
    "docker cp alice.csv.enc teeapps-sim:/host/testdata/breast_cancer/alice.csv.enc\n",
    "docker cp bob.csv.enc teeapps-sim:/host/testdata/breast_cancer/bob.csv.enc\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "#### 3. 配置并生成任务执行配置文件\n",
    "\n",
    "进入可信APP容器的/host/integration_test目录，目录的文件列表如下：\n",
    "```bash\n",
    ".\n",
    "|-- README.md\n",
    "|-- biclassification_eval.json\n",
    "|-- convert.py\n",
    "|-- psi.json\n",
    "|-- requirement.txt\n",
    "|-- train_test_split.json\n",
    "|-- xgb.json\n",
    "`-- xgb_predict.json\n",
    "```\n",
    "\n",
    "可以看到已经预置了5个APP的任务配置文件，这些配置文件描述了APP执行的信息。\n",
    "\n",
    "- psi.json: 数据求交APP，详细说明可以参见[数据求交](../architecture/apps/intersect.md)。您需要修改以下内容：\n",
    "    - `\"uri\": \"file://input/?id=alice_uuid&&uri=/host/testdata/breast_cancer/alice.csv.enc\"`: 您需要修改`id=alice_uuid`为`id=breast_cancer_alice`，因为在步骤二中我们给alice的数据取名为`breast_cancer_alice`。\n",
    "    - `\"uri\": \"file://input/?id=bob_uuid&&uri=/host/testdata/breast_cancer/bob.csv.enc\"`: 您需要修改`id=bob_uuid`为`id=breast_cancer_bob`，因为在步骤二中我们给bob的数据取名为`breast_cancer_bob`。\n",
    "- train_test_split.json: 数据集拆分APP，详细说明可以参见[数据集拆分](../architecture/apps/split.md)。\n",
    "- xgb.json: XGBoost训练APP，详细说明可以参见[XGBoost训练](../architecture/apps/xgb_train.md)。\n",
    "- xgb_predict.json: XGBoost预测APP，详细说明可以参见[XGBoost预测](../architecture/apps/xgb_predict.md)。\n",
    "- biclassification_eval.json：二分类评估APP，详细说明可以参见[二分类评估](../architecture/apps/binary_evaluation.md)。\n",
    "\n",
    "在正式运行APP之前，carol需要对任务配置文件进行签名。\n",
    "\n",
    "**为什么要进行签名？因为前面alice和bob只授权了carol对数据进行计算，所以carol需要通过签名的方式向可信APP证明其身份，可信APP只有在确认计算发起人身份与授权策略一致时才会执行。**\n",
    "\n",
    "签名的方法如下：\n",
    "\n",
    "(a) carol把自己的私钥和证书拷贝到容器中\n",
    "\n",
    "在**宿主机**上执行下列命令。\n",
    "```bash\n",
    "docker cp carol.key teeapps-sim:/host/integration_test/\n",
    "docker cp carol.crt teeapps-sim:/host/integration_test/\n",
    "```\n",
    "\n",
    "(b) 对任务配置文件进行签名\n",
    "\n",
    "下列命令的作用是对psi.json进行签名，您还需要对train_test_split.json、xgb.json、xgb_predict.json和biclassification_eval.json进行同样的操作。\n",
    "\n",
    "命令说明：\n",
    "\n",
    "- `capsule_manager_endpoint`请填写实际CapsuleManager的服务地址。\n",
    "- `tee_task_config_path`是签名后的文件（本例中叫做`psi_task.json`）\n",
    "\n",
    "```bash\n",
    "pip install -r requirement.txt\n",
    "python convert.py --cert_path carol.crt --prikey_path carol.key --task_config_path psi.json --scope default --capsule_manager_endpoint {CapulseManager的地址} --tee_task_config_path psi_task.json\n",
    "```\n",
    "\n",
    "假设carol签名后得到的任务执行配置文件分别为\n",
    "```bash\n",
    "|-- biclassification_eval_task.json\n",
    "|-- psi_task.json\n",
    "|-- train_test_split_task.json\n",
    "|-- xgb_predict_task.json\n",
    "`-- xgb_task.json\n",
    "```\n",
    "\n",
    "(c) （可选）检查生成的任务执行配置文件内容\n",
    "\n",
    "如果一切顺利，您将会得到形如以下例子的任务执行配置文件，文件说明如下。\n",
    "\n",
    "- `task_initiator_id`：表示carol的机构ID。\n",
    "- `task_initiator_certs`： carol的证书。\n",
    "- `task_body`：原任务配置文件的内容进行BASE64编码后的结果。\n",
    "- `signature`：对task_body的签名，并对签名结果进行BASE64编码。\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"task_input_config\": {\n",
    "    \"tee_task_config\": {\n",
    "      \"task_initiator_id\": \"xxx\",\n",
    "      \"task_initiator_certs\": [\n",
    "        \"-----BEGIN CERTIFICATE-----\\nxxxx\\n-----END CERTIFICATE-----\\n\"\n",
    "      ],\n",
    "      \"scope\": \"default\",\n",
    "      \"task_body\": \"xxx\",\n",
    "      \"signature\": \"xxx\",\n",
    "      \"sign_algorithm\": \"RS256\",\n",
    "      \"capsule_manager_endpoint\": \"xxxx\"\n",
    "    }\n",
    "  }\n",
    "}\n",
    "```\n"
   ]
  },
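  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the `task_body` field concrete, here is a minimal stdlib-only sketch (the tiny task config below is hypothetical; the real RS256 signature is produced by convert.py using carol's private key):\n",
    "\n",
    "```python\n",
    "import base64\n",
    "import json\n",
    "\n",
    "# Hypothetical stand-in for a task config file such as psi.json.\n",
    "task_config = {'app': 'psi', 'scope': 'default'}\n",
    "\n",
    "# task_body is the original task config content, BASE64-encoded.\n",
    "task_body = base64.b64encode(json.dumps(task_config).encode()).decode()\n",
    "\n",
    "# Anyone holding the signed file can decode task_body to audit what was signed.\n",
    "decoded = json.loads(base64.b64decode(task_body))\n",
    "assert decoded == task_config\n",
    "```"
   ]
  },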
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4. 执行可信APP\n",
    "\n",
    "**开启mtls**\n",
    "\n",
    "如果CapsuleManager开启了mtls，那么可信APP需要配置证书并开启tls选项。\n",
    "您需要将ca证书、客户端证书、客户端私钥拷贝到以下路径，注意目标路径以及目标文件名需要与以下命令中一致。\n",
    "```bash\n",
    "docker cp ca.crt teeapps-sim:/host/certs/ca.crt\n",
    "\n",
    "docker cp client.crt teeapps-sim:/host/certs/client.crt\n",
    "\n",
    "docker cp client.key teeapps-sim:/host/certs/client.key\n",
    "```\n",
    "并且将后续APP执行命令中的`--enable_capsule_tls=false` 改成 `--enable_capsule_tls=true`。\n",
    "\n",
    "\n",
    "(a) 数据求交\n",
    "\n",
    "在可信APP容器中执行以下命令进行求交：\n",
    "\n",
    "```bash\n",
    "cd /home/teeapp/sim/teeapps\n",
    "./main --plat=sim --enable_console_logger=true --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/psi_task.json\n",
    "```\n",
    "\n",
    "求交结果是加密的，对应的加密文件位于/host/testdata/breast_cancer/join_table（您可以在psi.json中找到对应的配置项）。\n",
    "\n",
    "（可选）如果您感兴趣，可以按照步骤五导出数据的说明导出该密文文件。，您获得的明文结果会是一张包含一列id、10列特征以及1列标签值，包含569个样本。您会发现内容和前面提到的通过sklearn下载得到的breast_cancer数据集内容是一样的。\n",
    "\n",
    "```bash\n",
    "id,mean radius,mean texture,mean perimeter,mean area,mean smoothness,mean compactness,mean concavity,mean concave points,mean symmetry,mean fractal dimension,target\n",
    "842302,17.99,10.38,122.8,1001.0,0.1184,0.2776,0.3001,0.1471,0.2419,0.07871,0\n",
    "842517,20.57,17.77,132.9,1326.0,0.08474,0.07864,0.0869,0.07017,0.1812,0.05667,0\n",
    "84300903,19.69,21.25,130.0,1203.0,0.1096,0.1599,0.1974,0.1279,0.2069,0.05999,0\n",
    "84348301,11.42,20.38,77.58,386.1,0.1425,0.2839,0.2414,0.1052,0.2597,0.09744,0\n",
    "84358402,20.29,14.34,135.1,1297.0,0.1003,0.1328,0.198,0.1043,0.1809,0.05883,0\n",
    "843786,12.45,15.7,82.57,477.1,0.1278,0.17,0.1578,0.08089,0.2087,0.07613,0\n",
    "844359,18.25,19.98,119.6,1040.0,0.09463,0.109,0.1127,0.074,0.1794,0.05742,0\n",
    "84458202,13.71,20.83,90.2,577.9,0.1189,0.1645,0.09366,0.05985,0.2196,0.07451,0\n",
    "844981,13.0,21.82,87.5,519.8,0.1273,0.1932,0.1859,0.09353,0.235,0.07389,0\n",
    "...\n",
    "```\n",
    "\n",
    "(b) 拆分数据集\n",
    "\n",
    "继续执行命令。拆分后得到训练（80%）和测试（20%）两份数据集，存放位置为/host/testdata/breast_cancer/train_table和/host/testdata/breast_cancer/test_table。\n",
    "\n",
    "（可选）同样地，您同样可以按照步骤五中的方法获取数据密钥对其进行解密，与明文拆分的结果进行比较，两者预期是一致的。\n",
    "\n",
    "```\n",
    "./main --plat=sim --enable_console_logger=true --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/train_test_split_task.json\n",
    "```\n",
    "\n",
    "(c) XGBoost训练\n",
    "\n",
    "继续执行命令。计算结果为一个加密的xgb树模型，存放位置为/host/testdata/breast_cancer/xgb_model。\n",
    "\n",
    "```bash\n",
    "./main --plat=sim --enable_console_logger=true --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/xgb_task.json\n",
    "```\n",
    "\n",
    "（可选）同样地，您同样可以按照步骤五中的方法对模型进行解密，并参考前面直接明文建模的代码对模型进行查看，两者预期是一致的。\n",
    "\n",
    "(d) XGBoost预测\n",
    "\n",
    "继续执行命令。计算结果为预测结果，包含以下列：score-预测结果、label-原始的标签、id：样本ID，文件存放位置为/host/testdata/breast_cancer/xgb_model。\n",
    "\n",
    "```bash\n",
    "./main --plat=sim --enable_console_logger=true --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/xgb_predict_task.json\n",
    "```\n",
    "\n",
    "（可选）同样地，您同样可以按照步骤五中的方法对预测结果进行解密，您将得到以下结果，与明文预测得到的结果是一致的。\n",
    "```bash\n",
    "score,label,id\n",
    "0.999446,True,8911834\n",
    "0.000215,False,8811842\n",
    "0.999848,True,911408\n",
    "0.997493,True,909220\n",
    "0.996475,True,862261\n",
    "0.999694,True,89511502\n",
    "0.999415,True,871149\n",
    "0.000637,False,9113538\n",
    "0.972095,True,925277\n",
    "0.994735,True,88249602\n",
    "...\n",
    "```\n",
    "\n",
    "(e) 二分类评估\n",
    "\n",
    "继续执行命令，对预测结果进行评估。\n",
    "\n",
    "```bash\n",
    "./main --plat=sim --enable_console_logger=true --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/biclassification_eval_task.json\n",
    "```\n",
    "\n",
    "输出结果为一个二分类评估的结果，二分类结果为明文形式，内容包含：\n",
    "\n",
    "- summary_report: 总结报告，包含total_samples、positive_samples、negative_samples、auc、ks和f1_score\n",
    "- eq_frequent_bin_report: 等频分箱报告\n",
    "- eq_range_bin_report: 等距分箱报告\n",
    "- head_report: FPR = 0.001， 0.005， 0.01， 0.05， 0.1， 0.2 的精度报告，包含fpr、precision、recall和threshold\n",
    "\n",
    "您可以看到auc、f1和ks值与直接使用breast cancer数据集建模一致。\n",
    "\n",
    "\n",
    "部分内容展示如下：\n",
    "```json\n",
    "{\n",
    "  \"name\": \"reports\",\n",
    "  \"tabs\": [\n",
    "    {\n",
    "      \"name\": \"SummaryReport\",\n",
    "      \"desc\": \"Summary Report for bi-classification evaluation.\",\n",
    "      \"divs\": [\n",
    "        {\n",
    "          \"children\": [\n",
    "            {\n",
    "              \"type\": \"descriptions\",\n",
    "              \"descriptions\": {\n",
    "                \"items\": [\n",
    "                  {\n",
    "                    \"name\": \"auc\",\n",
    "                    \"type\": \"float\",\n",
    "                    \"value\": {\n",
    "                      \"f\": 0.979642\n",
    "                    }\n",
    "                  },\n",
    "                  {\n",
    "                    \"name\": \"ks\",\n",
    "                    \"type\": \"float\",\n",
    "                    \"value\": {\n",
    "                      \"f\": 0.85503685\n",
    "                    }\n",
    "                  },\n",
    "                  {\n",
    "                    \"name\": \"f1_score\",\n",
    "                    \"type\": \"float\",\n",
    "                    \"value\": {\n",
    "                      \"f\": 0.9426752\n",
    "                    }\n",
    "                  }\n",
    "                ]\n",
    "              }\n",
    "            }\n",
    "          ]\n",
    "        }\n",
    "      ]\n",
    "    }\n",
    "  ]\n",
    "}\n",
    "```"
   ]
  },
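  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to consume the report programmatically, a sketch like the following extracts the summary metrics from a trimmed copy of the structure above (field names as in the sample; values illustrative):\n",
    "\n",
    "```python\n",
    "# Trimmed copy of the SummaryReport structure shown above.\n",
    "report = {\n",
    "    'tabs': [{'name': 'SummaryReport', 'divs': [{'children': [{\n",
    "        'type': 'descriptions',\n",
    "        'descriptions': {'items': [\n",
    "            {'name': 'auc', 'value': {'f': 0.979642}},\n",
    "            {'name': 'ks', 'value': {'f': 0.85503685}},\n",
    "            {'name': 'f1_score', 'value': {'f': 0.9426752}},\n",
    "        ]},\n",
    "    }]}]}],\n",
    "}\n",
    "\n",
    "# Walk tabs -> divs -> children -> descriptions.items and collect metrics.\n",
    "metrics = {}\n",
    "for tab in report['tabs']:\n",
    "    for div in tab['divs']:\n",
    "        for child in div['children']:\n",
    "            for item in child['descriptions']['items']:\n",
    "                metrics[item['name']] = item['value']['f']\n",
    "\n",
    "assert set(metrics) == {'auc', 'ks', 'f1_score'}\n",
    "```"
   ]
  },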
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 选项二：SGX模式\n",
    "\n",
    "若您使用SGX环境执行可信APP，可以按照下列说明进行。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1. 启动可信APP容器 \n",
    "\n",
    "```bash\n",
    "docker run -it --name teeapps-sgx --network=host -v /dev/sgx_enclave:/dev/sgx/enclave -v /dev/sgx_provision:/dev/sgx/provision --privileged=true secretflow/teeapps-sgx-ubuntu20.04:latest bash\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2. 修改 PCCS 配置\n",
    "\n",
    "> 提示：如果您还没有PCCS服务，则可以参考[部署PCCS](../architecture/tee/sgx.md#如何部署pccs服务)。\n",
    "\n",
    "\n",
    "1. 修改PCCS的配置文件/etc/sgx_default_qcnl.conf，把PCCS_URL配置为PCCS的实际部署服务地址。\n",
    "\n",
    "```bash\n",
    "# PCCS server address\n",
    "\"pccs_url\": \"https://localhost:8081/sgx/certification/v4/\"\n",
    "\n",
    "# To accept insecure HTTPS certificate, set this option to FALSE\n",
    "\"use_secure_cert\": false\n",
    "\n",
    "```\n",
    "\n",
    "2. 修改occlum_release/image/etc/kubetee/unified_attestation.json，将ua_dcap_pccs_url配置为实际的PCCS服务地址。\n",
    "\n",
    "```json\n",
    "{\n",
    "    \"ua_ias_url\": \"\",\n",
    "    \"ua_ias_spid\": \"\",\n",
    "    \"ua_ias_apk_key\": \"\",\n",
    "    \"ua_dcap_lib_path\": \"\",\n",
    "    \"ua_dcap_pccs_url\": \"https://localhost:8081/sgx/certification/v3/\",\n",
    "    \"ua_uas_url\": \"\",\n",
    "    \"ua_uas_app_key\": \"\",\n",
    "    \"ua_uas_app_secret\": \"\"\n",
    "}\n",
    "```\n"
   ]
  },
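  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before rebuilding occlum, it can help to confirm that the URL was actually filled in. A hypothetical check (the helper and sample below are illustrative, not part of the TrustedFlow tooling):\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "def check_pccs_url(config_text):\n",
    "    # Parse unified_attestation.json content and require a usable PCCS URL.\n",
    "    url = json.loads(config_text).get('ua_dcap_pccs_url', '')\n",
    "    if not url.startswith('https://'):\n",
    "        raise ValueError('ua_dcap_pccs_url is not configured')\n",
    "    return url\n",
    "\n",
    "sample = json.dumps({'ua_dcap_pccs_url': 'https://localhost:8081/sgx/certification/v3/'})\n",
    "assert check_pccs_url(sample).endswith('/v3/')\n",
    "```"
   ]
  },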
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3. Generate a private key, then build with it\n",
    "\n",
    "You first need to generate a private key, which is used only for building occlum. The script below generates one and saves it as private_key.pem in the current directory. Keep your private key safe and do not disclose it to anyone.\n",
    "\n",
    "```bash\n",
    "openssl genrsa -3 -out private_key.pem 3072\n",
    "```\n",
    "\n",
    "Once the key is generated, build occlum with the private key.\n",
    "\n",
    "```bash\n",
    "occlum build -f --sign-key private_key.pem\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4. Put alice's and bob's encrypted data files into the container\n",
    "\n",
    "Run the following commands on the **host machine**.\n",
    "\n",
    "```bash\n",
    "docker cp alice.csv.enc teeapps-sgx:/home/teeapp/occlum/occlum_instance/testdata/breast_cancer/\n",
    "docker cp bob.csv.enc teeapps-sgx:/home/teeapp/occlum/occlum_instance/testdata/breast_cancer/\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 5. Generate the task execution config files\n",
    "\n",
    "Enter the /host/integration_test directory of the trusted APP container; its file listing is as follows:\n",
    "```bash\n",
    ".\n",
    "|-- README.md\n",
    "|-- biclassification_eval.json\n",
    "|-- convert.py\n",
    "|-- psi.json\n",
    "|-- requirement.txt\n",
    "|-- train_test_split.json\n",
    "|-- xgb.json\n",
    "`-- xgb_predict.json\n",
    "```\n",
    "\n",
    "You can see five preloaded APP task config files; they describe how each APP is to be executed.\n",
    "\n",
    "- psi.json: the private set intersection APP; see [Intersect](../architecture/apps/intersect.md) for details. You need to modify the following:\n",
    "    - `\"uri\": \"file://input/?id=alice_uuid&&uri=/host/testdata/breast_cancer/alice.csv.enc\"`: change `id=alice_uuid` to `id=breast_cancer_alice`, because in step 2 we named alice's data `breast_cancer_alice`.\n",
    "    - `\"uri\": \"file://input/?id=bob_uuid&&uri=/host/testdata/breast_cancer/bob.csv.enc\"`: change `id=bob_uuid` to `id=breast_cancer_bob`, because in step 2 we named bob's data `breast_cancer_bob`.\n",
    "- train_test_split.json: the dataset split APP; see [Split](../architecture/apps/split.md) for details.\n",
    "- xgb.json: the XGBoost training APP; see [XGBoost training](../architecture/apps/xgb_train.md) for details.\n",
    "- xgb_predict.json: the XGBoost prediction APP; see [XGBoost prediction](../architecture/apps/xgb_predict.md) for details.\n",
    "- biclassification_eval.json: the binary classification evaluation APP; see [Binary classification evaluation](../architecture/apps/binary_evaluation.md) for details.\n",
    "\n",
    "Before actually running the APPs, carol needs to sign the task config files.\n",
    "\n",
    "**Why sign? Because alice and bob authorized only carol to compute on their data, carol must prove her identity to the trusted APP via a signature; the trusted APP executes only after confirming that the task initiator's identity matches the authorization policy.**\n",
    "\n",
    "Sign as follows:\n",
    "\n",
    "(a) carol copies her private key and certificate into the container\n",
    "\n",
    "Run the following commands on the **host machine**.\n",
    "```bash\n",
    "docker cp carol.key teeapps-sgx:/home/teeapp/occlum/occlum_instance/integration_test/\n",
    "docker cp carol.crt teeapps-sgx:/home/teeapp/occlum/occlum_instance/integration_test/\n",
    "```\n",
    "\n",
    "(b) Sign the task config files\n",
    "\n",
    "The command below signs psi.json; repeat the same operation for train_test_split.json, xgb.json, xgb_predict.json, and biclassification_eval.json.\n",
    "\n",
    "Command options:\n",
    "\n",
    "- `capsule_manager_endpoint`: fill in the actual CapsuleManager service address.\n",
    "- `tee_task_config_path`: the signed output file (named `psi_task.json` in this example).\n",
    "\n",
    "```bash\n",
    "pip install -r requirement.txt\n",
    "python convert.py --cert_path carol.crt --prikey_path carol.key --task_config_path psi.json --scope default --capsule_manager_endpoint {CapsuleManager address} --tee_task_config_path psi_task.json\n",
    "```\n",
    "\n",
    "Suppose the signed task execution config files carol obtains are as follows:\n",
    "```bash\n",
    "|-- biclassification_eval_task.json\n",
    "|-- psi_task.json\n",
    "|-- train_test_split_task.json\n",
    "|-- xgb_predict_task.json\n",
    "`-- xgb_task.json\n",
    "```\n",
    "\n",
    "(c) (Optional) Inspect the generated task execution config files\n",
    "\n",
    "If everything goes well, you will get task execution config files like the example below. Field descriptions:\n",
    "\n",
    "- `task_initiator_id`: carol's institution ID.\n",
    "- `task_initiator_certs`: carol's certificates.\n",
    "- `task_body`: the content of the original task config file, BASE64-encoded.\n",
    "- `signature`: the signature over task_body, with the result BASE64-encoded.\n",
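    "\n",
    "The relationship between these fields can be sketched in Python. This is a hypothetical illustration only, assuming the `cryptography` package and PKCS#1 v1.5 padding for RS256; `convert.py` remains the authoritative implementation.\n",
    "\n",
    "```python\n",
    "import base64\n",
    "from cryptography.hazmat.primitives import hashes, serialization\n",
    "from cryptography.hazmat.primitives.asymmetric import padding\n",
    "\n",
    "def sign_task_config(task_config: bytes, private_key_pem: bytes):\n",
    "    # task_body: the original task config file content, BASE64-encoded.\n",
    "    key = serialization.load_pem_private_key(private_key_pem, password=None)\n",
    "    task_body = base64.b64encode(task_config)\n",
    "    # RS256: an RSA-SHA256 signature over task_body, itself BASE64-encoded.\n",
    "    signature = base64.b64encode(key.sign(task_body, padding.PKCS1v15(), hashes.SHA256()))\n",
    "    return task_body.decode(), signature.decode()\n",
    "```\n",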
    "\n",
    "```json\n",
    "{\n",
    "  \"task_input_config\": {\n",
    "    \"tee_task_config\": {\n",
    "      \"task_initiator_id\": \"xxx\",\n",
    "      \"task_initiator_certs\": [\n",
    "        \"-----BEGIN CERTIFICATE-----\\nxxxx\\n-----END CERTIFICATE-----\\n\"\n",
    "      ],\n",
    "      \"scope\": \"default\",\n",
    "      \"task_body\": \"xxx\",\n",
    "      \"signature\": \"xxx\",\n",
    "      \"sign_algorithm\": \"RS256\",\n",
    "      \"capsule_manager_endpoint\": \"xxxx\"\n",
    "    }\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 6. Run the trusted APP\n",
    "\n",
    "**Enabling mTLS**\n",
    "\n",
    "If CapsuleManager has mTLS enabled, the trusted APP needs its certificates configured and the TLS option turned on.\n",
    "Copy the CA certificate, client certificate, and client private key to the paths below; the target paths and file names must match the commands exactly.\n",
    "```bash\n",
    "docker cp ca.crt teeapps-sgx:/home/teeapp/occlum/occlum_instance/certs/ca.crt\n",
    "\n",
    "docker cp client.crt teeapps-sgx:/home/teeapp/occlum/occlum_instance/certs/client.crt\n",
    "\n",
    "docker cp client.key teeapps-sgx:/home/teeapp/occlum/occlum_instance/certs/client.key\n",
    "```\n",
    "Then change `--enable_capsule_tls=false` to `--enable_capsule_tls=true` in the subsequent APP execution commands.\n",
    "\n",
    "\n",
    "(a) Private set intersection\n",
    "\n",
    "Run the following commands in the trusted APP container to perform the intersection:\n",
    "\n",
    "```bash\n",
    "cd /home/teeapp/occlum/occlum_instance\n",
    "occlum run /bin/main --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/psi_task.json\n",
    "```\n",
    "\n",
    "The intersection result is encrypted; the ciphertext file is located at /host/testdata/breast_cancer/join_table (you can find the corresponding config item in psi.json). Note that occlum uses `host` to refer to its current working directory, so the file is actually stored at `testdata/breast_cancer/join_table` under the current directory.\n",
    "\n",
    "(Optional) If you are interested, you can export this ciphertext file following the data export instructions in step 5. The plaintext result is a table with one id column, 10 feature columns, and 1 label column, containing 569 samples. You will find its content identical to the breast cancer dataset downloaded via sklearn shown earlier.\n",
    "\n",
    "```bash\n",
    "id,mean radius,mean texture,mean perimeter,mean area,mean smoothness,mean compactness,mean concavity,mean concave points,mean symmetry,mean fractal dimension,target\n",
    "842302,17.99,10.38,122.8,1001.0,0.1184,0.2776,0.3001,0.1471,0.2419,0.07871,0\n",
    "842517,20.57,17.77,132.9,1326.0,0.08474,0.07864,0.0869,0.07017,0.1812,0.05667,0\n",
    "84300903,19.69,21.25,130.0,1203.0,0.1096,0.1599,0.1974,0.1279,0.2069,0.05999,0\n",
    "84348301,11.42,20.38,77.58,386.1,0.1425,0.2839,0.2414,0.1052,0.2597,0.09744,0\n",
    "84358402,20.29,14.34,135.1,1297.0,0.1003,0.1328,0.198,0.1043,0.1809,0.05883,0\n",
    "843786,12.45,15.7,82.57,477.1,0.1278,0.17,0.1578,0.08089,0.2087,0.07613,0\n",
    "844359,18.25,19.98,119.6,1040.0,0.09463,0.109,0.1127,0.074,0.1794,0.05742,0\n",
    "84458202,13.71,20.83,90.2,577.9,0.1189,0.1645,0.09366,0.05985,0.2196,0.07451,0\n",
    "844981,13.0,21.82,87.5,519.8,0.1273,0.1932,0.1859,0.09353,0.235,0.07389,0\n",
    "...\n",
    "```\n",
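    "\n",
    "For intuition, a plaintext analogue of this intersection step is an inner join on the id column. The toy frames below are hypothetical and only illustrate the shape of the operation:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "alice = pd.DataFrame({'id': [842302, 842517, 843786], 'mean radius': [17.99, 20.57, 12.45]})\n",
    "bob = pd.DataFrame({'id': [842517, 843786, 999999], 'target': [0, 0, 1]})\n",
    "# Keep only the ids present on both sides, as the PSI APP does.\n",
    "join_table = alice.merge(bob, on='id', how='inner')\n",
    "```\n",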
    "\n",
    "(b) Split the dataset\n",
    "\n",
    "Continue with the following command. The split yields a training set (80%) and a test set (20%), stored at testdata/breast_cancer/train_table and testdata/breast_cancer/test_table respectively.\n",
    "\n",
    "(Optional) Likewise, you can obtain the data key and decrypt these following the method in step 5, then compare against a plaintext split; the results are expected to match.\n",
    "\n",
    "```bash\n",
    "occlum run /bin/main --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/train_test_split_task.json\n",
    "```\n",
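    "\n",
    "For comparison, the plaintext equivalent of an 80/20 split can be reproduced with sklearn; the random seed below is arbitrary and need not match the one used by the APP:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import load_breast_cancer\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "df = load_breast_cancer(as_frame=True).frame\n",
    "# 80% train / 20% test, mirroring train_test_split.json.\n",
    "train_df, test_df = train_test_split(df, train_size=0.8, random_state=94)\n",
    "```\n",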
    "\n",
    "(c) XGBoost training\n",
    "\n",
    "Continue with the following command. The result is an encrypted XGBoost tree model stored at testdata/breast_cancer/xgb_model.\n",
    "\n",
    "```bash\n",
    "occlum run /bin/main --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/xgb_task.json\n",
    "```\n",
    "\n",
    "(Optional) Likewise, you can decrypt the model following the method in step 5 and inspect it with the plaintext modeling code shown earlier; the two are expected to be identical.\n",
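    "\n",
    "As a reminder of what such an inspection looks like, a small plaintext model can be dumped to a readable table; a decrypted model can be viewed the same way (the hyperparameters below are arbitrary, not those of the APP):\n",
    "\n",
    "```python\n",
    "import xgboost as xgb\n",
    "from sklearn.datasets import load_breast_cancer\n",
    "\n",
    "data = load_breast_cancer()\n",
    "model = xgb.XGBClassifier(n_estimators=2, max_depth=2)\n",
    "model.fit(data.data, data.target)\n",
    "# One row per tree node: split feature, threshold, gain, etc.\n",
    "trees = model.get_booster().trees_to_dataframe()\n",
    "```\n",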
    "\n",
    "(d) XGBoost prediction\n",
    "\n",
    "Continue with the following command. The output is the prediction result, with columns score (predicted score), label (original label), and id (sample ID); the file is stored at testdata/breast_cancer/xgb_model.\n",
    "\n",
    "```bash\n",
    "occlum run /bin/main --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/xgb_predict_task.json\n",
    "```\n",
    "\n",
    "(Optional) Likewise, you can decrypt the prediction result following the method in step 5; you will get the following output, identical to the plaintext prediction result.\n",
    "```bash\n",
    "score,label,id\n",
    "0.999446,True,8911834\n",
    "0.000215,False,8811842\n",
    "0.999848,True,911408\n",
    "0.997493,True,909220\n",
    "0.996475,True,862261\n",
    "0.999694,True,89511502\n",
    "0.999415,True,871149\n",
    "0.000637,False,9113538\n",
    "0.972095,True,925277\n",
    "0.994735,True,88249602\n",
    "...\n",
    "```\n",
    "\n",
    "(e) Binary classification evaluation\n",
    "\n",
    "Continue with the following command to evaluate the prediction result.\n",
    "\n",
    "```bash\n",
    "occlum run /bin/main --enable_capsule_tls=false --entry_task_config_path=/host/integration_test/biclassification_eval_task.json\n",
    "```\n",
    "\n",
    "The output is a binary classification evaluation report, produced in plaintext, containing:\n",
    "\n",
    "- summary_report: the summary report, including total_samples, positive_samples, negative_samples, auc, ks, and f1_score\n",
    "- eq_frequent_bin_report: the equal-frequency binning report\n",
    "- eq_range_bin_report: the equal-width binning report\n",
    "- head_report: precision reports at FPR = 0.001, 0.005, 0.01, 0.05, 0.1, 0.2, each including fpr, precision, recall, and threshold\n",
    "\n",
    "You can see that the auc, f1, and ks values match those obtained by modeling directly on the breast cancer dataset.\n",
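    "\n",
    "All three metrics can be recomputed from any plaintext score/label pair with sklearn; the tiny arrays below are hypothetical, not the actual prediction output:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.metrics import f1_score, roc_auc_score, roc_curve\n",
    "\n",
    "label = np.array([1, 0, 1, 1, 0, 1])\n",
    "score = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.95])\n",
    "auc = roc_auc_score(label, score)\n",
    "# KS is the maximum gap between TPR and FPR across thresholds.\n",
    "fpr, tpr, _ = roc_curve(label, score)\n",
    "ks = float(np.max(tpr - fpr))\n",
    "f1 = f1_score(label, score >= 0.5)\n",
    "```\n",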
    "\n",
    "\n",
    "Part of the content is shown below:\n",
    "```json\n",
    "{\n",
    "  \"name\": \"reports\",\n",
    "  \"tabs\": [\n",
    "    {\n",
    "      \"name\": \"SummaryReport\",\n",
    "      \"desc\": \"Summary Report for bi-classification evaluation.\",\n",
    "      \"divs\": [\n",
    "        {\n",
    "          \"children\": [\n",
    "            {\n",
    "              \"type\": \"descriptions\",\n",
    "              \"descriptions\": {\n",
    "                \"items\": [\n",
    "                  {\n",
    "                    \"name\": \"auc\",\n",
    "                    \"type\": \"float\",\n",
    "                    \"value\": {\n",
    "                      \"f\": 0.979642\n",
    "                    }\n",
    "                  },\n",
    "                  {\n",
    "                    \"name\": \"ks\",\n",
    "                    \"type\": \"float\",\n",
    "                    \"value\": {\n",
    "                      \"f\": 0.85503685\n",
    "                    }\n",
    "                  },\n",
    "                  {\n",
    "                    \"name\": \"f1_score\",\n",
    "                    \"type\": \"float\",\n",
    "                    \"value\": {\n",
    "                      \"f\": 0.9426752\n",
    "                    }\n",
    "                  }\n",
    "                ]\n",
    "              }\n",
    "            }\n",
    "          ]\n",
    "        }\n",
    "      ]\n",
    "    }\n",
    "  ]\n",
    "}\n",
    "```\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "ray",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
