{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Python Machine Learning Kaggle Case Studies (Session 21), Written Assignment for Lesson 9\n",
    "Student ID: 113778  \n",
    "\n",
    "**Assignment:**\n",
    "1. In what other scenarios could entity embedding be applied? Share your views.  \n",
    "2. Try to run the program from the paper, https://github.com/entron/entity-embedding-rossmann. If possible, replace the dataset with a different one."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Assignment 1\n",
    "**In what other scenarios could entity embedding be applied? Share your views.**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An embedding is a way of mapping discrete (categorical) variables to continuous vector representations. Embeddings are very useful in neural networks: they reduce the dimensionality of categorical variables and, at the same time, represent those variables in a meaningful way.\n",
    "\n",
    "As I understand it, embeddings serve three main purposes:\n",
    "1. Finding nearest neighbors in the embedding space, which works well for making recommendations based on user interests.\n",
    "2. Serving as input features for supervised learning tasks.\n",
    "3. Visualizing relationships between different categorical values.\n",
    "\n",
    "It helps to compare this with one-hot encoding, the most common representation for discrete data. We first count the total number N of categories of the variable to be represented; each category is then encoded as a vector of N-1 zeros and a single 1. This has two obvious drawbacks:  \n",
    "* For variables with very many categories, the resulting vectors are extremely high-dimensional and sparse.\n",
    "* The encodings are completely independent of one another, so they cannot express any relationship between categories.  \n",
    "\n",
    "Given these two problems, the ideal representation of a categorical variable would use far fewer dimensions while still capturing, to some extent, the relationships between categories. This is exactly what embeddings were invented for.  \n",
    "A particularly nice property of embeddings is that they can be used to visualize the relatedness of the data they represent. To inspect them we first need to reduce them to 2 or 3 dimensions; the most popular dimensionality-reduction technique for this is t-Distributed Stochastic Neighbor Embedding (t-SNE).  \n",
    "The value of embeddings is not limited to word embeddings or entity embeddings: the broader idea of representing categorical data in a low-dimensional, learnable form is what matters. It lets us apply neural networks and deep learning to a much wider range of domains. Embeddings can represent many kinds of things; the key is to think clearly about the problem we need to solve and about what an embedding representation actually gives us."
   ]
  },
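  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of purpose 1 above (the city names and 2-D vectors are invented for illustration, not learned embeddings), nearest-neighbor lookup in an embedding space reduces to cosine similarity:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy 2-D embeddings for four hypothetical entities.\n",
    "emb = {'Delhi': np.array([0.9, 0.1]),\n",
    "       'Mumbai': np.array([0.8, 0.2]),\n",
    "       'Kochi': np.array([0.1, 0.9]),\n",
    "       'Ajmer': np.array([0.2, 0.8])}\n",
    "\n",
    "def nearest(name):\n",
    "    # Entity (other than name) whose embedding has the highest cosine similarity.\n",
    "    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))\n",
    "    return max((k for k in emb if k != name), key=lambda k: cos(emb[name], emb[k]))\n",
    "\n",
    "print(nearest('Delhi'))  # Mumbai\n",
    "```\n",
    "\n",
    "With learned entity embeddings, the same lookup surfaces categories the model treats as similar."
   ]
  },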
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Assignment 2\n",
    "Try to run the program from the paper, https://github.com/entron/entity-embedding-rossmann. If possible, replace the dataset with a different one."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1: run the program from the paper at https://github.com/entron/entity-embedding-rossmann"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, do not use the copy of entity-embedding-rossmann from the course resource platform; download the latest version from the repository instead. The platform's copy is outdated and targets Keras versions before 2.0, while virtually everyone now uses Keras 2.0 or later.  \n",
    "Running it is straightforward; a screenshot of the run is shown below:\n",
    "![kaggle9](https://gitee.com/dotzhen/cloud-notes/raw/master/20210904223855.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 2: replace the dataset with a different one"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Dataset overview\n",
    "I use a loan-default dataset, which looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:01:35.006965Z",
     "iopub.status.busy": "2021-09-04T09:01:35.006635Z",
     "iopub.status.idle": "2021-09-04T09:02:26.361399Z",
     "shell.execute_reply": "2021-09-04T09:02:26.360285Z",
     "shell.execute_reply.started": "2021-09-04T09:01:35.006931Z"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "df_train=pd.read_excel('train.xlsx')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:26.363309Z",
     "iopub.status.busy": "2021-09-04T09:02:26.362993Z",
     "iopub.status.idle": "2021-09-04T09:02:26.475251Z",
     "shell.execute_reply": "2021-09-04T09:02:26.473210Z",
     "shell.execute_reply.started": "2021-09-04T09:02:26.363279Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>ID</th>\n",
       "      <th>Gender</th>\n",
       "      <th>City</th>\n",
       "      <th>Monthly_Income</th>\n",
       "      <th>DOB</th>\n",
       "      <th>Lead_Creation_Date</th>\n",
       "      <th>Loan_Amount_Applied</th>\n",
       "      <th>Loan_Tenure_Applied</th>\n",
       "      <th>Existing_EMI</th>\n",
       "      <th>Employer_Name</th>\n",
       "      <th>...</th>\n",
       "      <th>Interest_Rate</th>\n",
       "      <th>Processing_Fee</th>\n",
       "      <th>EMI_Loan_Submitted</th>\n",
       "      <th>Filled_Form</th>\n",
       "      <th>Device_Type</th>\n",
       "      <th>Var2</th>\n",
       "      <th>Source</th>\n",
       "      <th>Var4</th>\n",
       "      <th>LoggedIn</th>\n",
       "      <th>Disbursed</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>ID000002C20</td>\n",
       "      <td>Female</td>\n",
       "      <td>Delhi</td>\n",
       "      <td>20000</td>\n",
       "      <td>1978-05-23</td>\n",
       "      <td>2015-05-15</td>\n",
       "      <td>300000.0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>CYBOSOL</td>\n",
       "      <td>...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>G</td>\n",
       "      <td>S122</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>ID000004E40</td>\n",
       "      <td>Male</td>\n",
       "      <td>Mumbai</td>\n",
       "      <td>35000</td>\n",
       "      <td>1985-10-07</td>\n",
       "      <td>2015-05-04</td>\n",
       "      <td>200000.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>TATA CONSULTANCY SERVICES LTD (TCS)</td>\n",
       "      <td>...</td>\n",
       "      <td>13.25</td>\n",
       "      <td>NaN</td>\n",
       "      <td>6762.9</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>G</td>\n",
       "      <td>S122</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>ID000007H20</td>\n",
       "      <td>Male</td>\n",
       "      <td>Panchkula</td>\n",
       "      <td>22500</td>\n",
       "      <td>1981-10-10</td>\n",
       "      <td>2015-05-19</td>\n",
       "      <td>600000.0</td>\n",
       "      <td>4.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>ALCHEMIST HOSPITALS LTD</td>\n",
       "      <td>...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>B</td>\n",
       "      <td>S143</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>ID000008I30</td>\n",
       "      <td>Male</td>\n",
       "      <td>Saharsa</td>\n",
       "      <td>35000</td>\n",
       "      <td>1987-11-30</td>\n",
       "      <td>2015-05-09</td>\n",
       "      <td>1000000.0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>BIHAR GOVERNMENT</td>\n",
       "      <td>...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>B</td>\n",
       "      <td>S143</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>ID000009J40</td>\n",
       "      <td>Male</td>\n",
       "      <td>Bengaluru</td>\n",
       "      <td>100000</td>\n",
       "      <td>1984-02-17</td>\n",
       "      <td>2015-05-20</td>\n",
       "      <td>500000.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>25000.0</td>\n",
       "      <td>GLOBAL EDGE SOFTWARE</td>\n",
       "      <td>...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>B</td>\n",
       "      <td>S134</td>\n",
       "      <td>3</td>\n",
       "      <td>1</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>87015</th>\n",
       "      <td>ID124813N30</td>\n",
       "      <td>Female</td>\n",
       "      <td>Ajmer</td>\n",
       "      <td>71901</td>\n",
       "      <td>1969-11-27</td>\n",
       "      <td>2015-07-31</td>\n",
       "      <td>1000000.0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>14500.0</td>\n",
       "      <td>MAYO COLLEGE</td>\n",
       "      <td>...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>G</td>\n",
       "      <td>S122</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>87016</th>\n",
       "      <td>ID124814O40</td>\n",
       "      <td>Female</td>\n",
       "      <td>Kochi</td>\n",
       "      <td>16000</td>\n",
       "      <td>1990-12-01</td>\n",
       "      <td>2015-07-31</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>KERALA COMMUNICATORS CABLE LTD</td>\n",
       "      <td>...</td>\n",
       "      <td>35.50</td>\n",
       "      <td>4800.0</td>\n",
       "      <td>9425.76</td>\n",
       "      <td>Y</td>\n",
       "      <td>Mobile</td>\n",
       "      <td>G</td>\n",
       "      <td>S122</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>87017</th>\n",
       "      <td>ID124816Q10</td>\n",
       "      <td>Male</td>\n",
       "      <td>Bengaluru</td>\n",
       "      <td>118000</td>\n",
       "      <td>1972-01-28</td>\n",
       "      <td>2015-07-31</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>BANGALORE INSTITUTE OF TECHNOLOGY</td>\n",
       "      <td>...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>N</td>\n",
       "      <td>Mobile</td>\n",
       "      <td>G</td>\n",
       "      <td>S122</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>87018</th>\n",
       "      <td>ID124818S30</td>\n",
       "      <td>Male</td>\n",
       "      <td>Bengaluru</td>\n",
       "      <td>98930</td>\n",
       "      <td>1977-04-27</td>\n",
       "      <td>2015-07-31</td>\n",
       "      <td>800000.0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>13660.0</td>\n",
       "      <td>FIRSTSOURCE SOLUTION LTD</td>\n",
       "      <td>...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>G</td>\n",
       "      <td>S122</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>87019</th>\n",
       "      <td>ID124821V10</td>\n",
       "      <td>Male</td>\n",
       "      <td>Mumbai</td>\n",
       "      <td>42300</td>\n",
       "      <td>1988-10-31</td>\n",
       "      <td>2015-07-31</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>GOVERNMENT OF INDIA</td>\n",
       "      <td>...</td>\n",
       "      <td>13.99</td>\n",
       "      <td>3450.0</td>\n",
       "      <td>18851.81</td>\n",
       "      <td>N</td>\n",
       "      <td>Web-browser</td>\n",
       "      <td>G</td>\n",
       "      <td>S122</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>87020 rows × 26 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "                ID  Gender       City  Monthly_Income        DOB  \\\n",
       "0      ID000002C20  Female      Delhi           20000 1978-05-23   \n",
       "1      ID000004E40    Male     Mumbai           35000 1985-10-07   \n",
       "2      ID000007H20    Male  Panchkula           22500 1981-10-10   \n",
       "3      ID000008I30    Male    Saharsa           35000 1987-11-30   \n",
       "4      ID000009J40    Male  Bengaluru          100000 1984-02-17   \n",
       "...            ...     ...        ...             ...        ...   \n",
       "87015  ID124813N30  Female      Ajmer           71901 1969-11-27   \n",
       "87016  ID124814O40  Female      Kochi           16000 1990-12-01   \n",
       "87017  ID124816Q10    Male  Bengaluru          118000 1972-01-28   \n",
       "87018  ID124818S30    Male  Bengaluru           98930 1977-04-27   \n",
       "87019  ID124821V10    Male     Mumbai           42300 1988-10-31   \n",
       "\n",
       "      Lead_Creation_Date  Loan_Amount_Applied  Loan_Tenure_Applied  \\\n",
       "0             2015-05-15             300000.0                  5.0   \n",
       "1             2015-05-04             200000.0                  2.0   \n",
       "2             2015-05-19             600000.0                  4.0   \n",
       "3             2015-05-09            1000000.0                  5.0   \n",
       "4             2015-05-20             500000.0                  2.0   \n",
       "...                  ...                  ...                  ...   \n",
       "87015         2015-07-31            1000000.0                  5.0   \n",
       "87016         2015-07-31                  0.0                  0.0   \n",
       "87017         2015-07-31                  0.0                  0.0   \n",
       "87018         2015-07-31             800000.0                  5.0   \n",
       "87019         2015-07-31                  0.0                  0.0   \n",
       "\n",
       "       Existing_EMI                        Employer_Name  ... Interest_Rate  \\\n",
       "0               0.0                              CYBOSOL  ...           NaN   \n",
       "1               0.0  TATA CONSULTANCY SERVICES LTD (TCS)  ...         13.25   \n",
       "2               0.0              ALCHEMIST HOSPITALS LTD  ...           NaN   \n",
       "3               0.0                     BIHAR GOVERNMENT  ...           NaN   \n",
       "4           25000.0                 GLOBAL EDGE SOFTWARE  ...           NaN   \n",
       "...             ...                                  ...  ...           ...   \n",
       "87015       14500.0                         MAYO COLLEGE  ...           NaN   \n",
       "87016           0.0       KERALA COMMUNICATORS CABLE LTD  ...         35.50   \n",
       "87017           0.0    BANGALORE INSTITUTE OF TECHNOLOGY  ...           NaN   \n",
       "87018       13660.0             FIRSTSOURCE SOLUTION LTD  ...           NaN   \n",
       "87019           0.0                  GOVERNMENT OF INDIA  ...         13.99   \n",
       "\n",
       "      Processing_Fee EMI_Loan_Submitted Filled_Form  Device_Type  Var2  \\\n",
       "0                NaN                NaN           N  Web-browser     G   \n",
       "1                NaN             6762.9           N  Web-browser     G   \n",
       "2                NaN                NaN           N  Web-browser     B   \n",
       "3                NaN                NaN           N  Web-browser     B   \n",
       "4                NaN                NaN           N  Web-browser     B   \n",
       "...              ...                ...         ...          ...   ...   \n",
       "87015            NaN                NaN           N  Web-browser     G   \n",
       "87016         4800.0            9425.76           Y       Mobile     G   \n",
       "87017            NaN                NaN           N       Mobile     G   \n",
       "87018            NaN                NaN           N  Web-browser     G   \n",
       "87019         3450.0           18851.81           N  Web-browser     G   \n",
       "\n",
       "       Source  Var4 LoggedIn Disbursed  \n",
       "0        S122     1        0       0.0  \n",
       "1        S122     3        0       0.0  \n",
       "2        S143     1        0       0.0  \n",
       "3        S143     3        0       0.0  \n",
       "4        S134     3        1       0.0  \n",
       "...       ...   ...      ...       ...  \n",
       "87015    S122     3        0       0.0  \n",
       "87016    S122     5        0       0.0  \n",
       "87017    S122     3        0       0.0  \n",
       "87018    S122     3        0       0.0  \n",
       "87019    S122     4        0       0.0  \n",
       "\n",
       "[87020 rows x 26 columns]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_train"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:26.478748Z",
     "iopub.status.busy": "2021-09-04T09:02:26.478302Z",
     "iopub.status.idle": "2021-09-04T09:02:26.626693Z",
     "shell.execute_reply": "2021-09-04T09:02:26.624737Z",
     "shell.execute_reply.started": "2021-09-04T09:02:26.478714Z"
    }
   },
   "source": [
    "The dataset has 87020 samples and 26 variables. The target variable is 'Disbursed' (1 = default, 0 = no default); the remaining variables are predictors. The goal is to predict the value of Disbursed. **Clearly, this is a binary classification problem.**  \n",
    "The entity-embedding-rossmann program was written for a regression problem, so some of the handling here differs (expanded on below)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Dataset preprocessing\n",
    "This step mainly handles missing values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:28.210015Z",
     "iopub.status.busy": "2021-09-04T09:02:28.209719Z",
     "iopub.status.idle": "2021-09-04T09:02:31.179673Z",
     "shell.execute_reply": "2021-09-04T09:02:31.178305Z",
     "shell.execute_reply.started": "2021-09-04T09:02:28.209987Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "62689 6762.9\n"
     ]
    }
   ],
   "source": [
    "# Fill missing values in numeric columns with the column mean;\n",
    "# treat a missing target as 0 (no default).\n",
    "numcols=df_train.select_dtypes(include=['int64','float64']).columns.tolist()\n",
    "numcols.remove('Disbursed')\n",
    "for a in numcols:\n",
    "    df_train[a]=df_train[a].fillna(df_train[a].mean())\n",
    "df_train['Disbursed']=df_train['Disbursed'].fillna(0)\n",
    "#df_train.info()\n",
    "\n",
    "def is_number(s):\n",
    "    try:\n",
    "        float(s)\n",
    "        return True\n",
    "    except ValueError:\n",
    "        pass\n",
    "    try:\n",
    "        import unicodedata\n",
    "        unicodedata.numeric(s)\n",
    "        return True\n",
    "    except (TypeError, ValueError):\n",
    "        pass\n",
    "    return False\n",
    "\n",
    "# Locate any non-numeric entries in EMI_Loan_Submitted.\n",
    "for i in range(df_train['EMI_Loan_Submitted'].shape[0]):\n",
    "    if is_number(df_train['EMI_Loan_Submitted'].iloc[i])==False:\n",
    "        print(i,df_train['EMI_Loan_Submitted'].iloc[i])\n",
    "# Row 62689 holds a malformed value; overwrite it manually.\n",
    "df_train.at[62689,'EMI_Loan_Submitted']=6762.9\n",
    "df_train['EMI_Loan_Submitted']=df_train['EMI_Loan_Submitted'].fillna(0)\n",
    "df_train['EMI_Loan_Submitted']=df_train['EMI_Loan_Submitted'].astype('float')\n",
    "\n",
    "# Categorical columns with missing values get an explicit 'unknown' level.\n",
    "nulobjcols=['City','Employer_Name','Salary_Account','Var1']\n",
    "#df_train[nulobjcols].describe()\n",
    "\n",
    "for a in nulobjcols:\n",
    "    df_train[a]=df_train[a].fillna('unknown')\n",
    "df_train['Disbursed']=df_train['Disbursed'].astype('int')\n",
    "df_train.to_csv('train.csv',index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After the processing above, the dataset contains no missing values. It is saved in csv format so the entity-embedding-rossmann program can consume it."
   ]
  },
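  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The no-missing-values claim can be verified with a generic pandas check; the toy frame below stands in for df_train and uses the same fill strategy:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({'City': ['Delhi', None], 'Existing_EMI': [0.0, np.nan]})\n",
    "df['City'] = df['City'].fillna('unknown')\n",
    "df['Existing_EMI'] = df['Existing_EMI'].fillna(df['Existing_EMI'].mean())\n",
    "print(df.isnull().sum().sum())  # 0\n",
    "```"
   ]
  },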
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:31.181598Z",
     "iopub.status.busy": "2021-09-04T09:02:31.181263Z",
     "iopub.status.idle": "2021-09-04T09:02:31.766145Z",
     "shell.execute_reply": "2021-09-04T09:02:31.765032Z",
     "shell.execute_reply.started": "2021-09-04T09:02:31.181566Z"
    }
   },
   "outputs": [],
   "source": [
    "df_train=pd.read_csv('train.csv',low_memory=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Data format conversion\n",
    "This part corresponds to extract_csv_files.py in the entity-embedding-rossmann program.  \n",
    "\n",
    "Its main job is to convert the data into a list of dictionaries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:32.506914Z",
     "iopub.status.busy": "2021-09-04T09:02:32.506596Z",
     "iopub.status.idle": "2021-09-04T09:02:34.795020Z",
     "shell.execute_reply": "2021-09-04T09:02:34.793781Z",
     "shell.execute_reply.started": "2021-09-04T09:02:32.506881Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['ID', 'Gender', 'City', 'Monthly_Income', 'DOB', 'Lead_Creation_Date', 'Loan_Amount_Applied', 'Loan_Tenure_Applied', 'Existing_EMI', 'Employer_Name', 'Salary_Account', 'Mobile_Verified', 'Var5', 'Var1', 'Loan_Amount_Submitted', 'Loan_Tenure_Submitted', 'Interest_Rate', 'Processing_Fee', 'EMI_Loan_Submitted', 'Filled_Form', 'Device_Type', 'Var2', 'Source', 'Var4', 'LoggedIn', 'Disbursed']\n",
      "[{'ID': 'ID124821V10', 'Gender': 'Male', 'City': 'Mumbai', 'Monthly_Income': '42300', 'DOB': '1988-10-31', 'Lead_Creation_Date': '2015-07-31', 'Loan_Amount_Applied': '0.0', 'Loan_Tenure_Applied': '0.0', 'Existing_EMI': '0.0', 'Employer_Name': 'GOVERNMENT OF INDIA', 'Salary_Account': 'unknown', 'Mobile_Verified': 'Y', 'Var5': '12', 'Var1': 'HBXA', 'Loan_Amount_Submitted': '690000.0', 'Loan_Tenure_Submitted': '4.0', 'Interest_Rate': '13.99', 'Processing_Fee': '3450.0', 'EMI_Loan_Submitted': '18851.81', 'Filled_Form': 'N', 'Device_Type': 'Web-browser', 'Var2': 'G', 'Source': 'S122', 'Var4': '4', 'LoggedIn': '0', 'Disbursed': '0'}, {'ID': 'ID124818S30', 'Gender': 'Male', 'City': 'Bengaluru', 'Monthly_Income': '98930', 'DOB': '1977-04-27', 'Lead_Creation_Date': '2015-07-31', 'Loan_Amount_Applied': '800000.0', 'Loan_Tenure_Applied': '5.0', 'Existing_EMI': '13660.0', 'Employer_Name': 'FIRSTSOURCE SOLUTION LTD', 'Salary_Account': 'ICICI Bank', 'Mobile_Verified': 'Y', 'Var5': '18', 'Var1': 'HBXX', 'Loan_Amount_Submitted': '800000.0', 'Loan_Tenure_Submitted': '5.0', 'Interest_Rate': '19.197474211930288', 'Processing_Fee': '5131.150838803793', 'EMI_Loan_Submitted': '0.0', 'Filled_Form': 'N', 'Device_Type': 'Web-browser', 'Var2': 'G', 'Source': 'S122', 'Var4': '3', 'LoggedIn': '0', 'Disbursed': '0'}, {'ID': 'ID124816Q10', 'Gender': 'Male', 'City': 'Bengaluru', 'Monthly_Income': '118000', 'DOB': '1972-01-28', 'Lead_Creation_Date': '2015-07-31', 'Loan_Amount_Applied': '0.0', 'Loan_Tenure_Applied': '0.0', 'Existing_EMI': '0.0', 'Employer_Name': 'BANGALORE INSTITUTE OF TECHNOLOGY', 'Salary_Account': 'Syndicate Bank', 'Mobile_Verified': 'Y', 'Var5': '8', 'Var1': 'HBXX', 'Loan_Amount_Submitted': '1200000.0', 'Loan_Tenure_Submitted': '4.0', 'Interest_Rate': '19.197474211930288', 'Processing_Fee': '5131.150838803793', 'EMI_Loan_Submitted': '0.0', 'Filled_Form': 'N', 'Device_Type': 'Mobile', 'Var2': 'G', 'Source': 'S122', 'Var4': '3', 'LoggedIn': '0', 'Disbursed': '0'}]\n"
     ]
    }
   ],
   "source": [
    "import pickle\n",
    "import csv\n",
    "\n",
    "def csv2dicts(csvfile):\n",
    "    data = []\n",
    "    keys = []\n",
    "    for row_index, row in enumerate(csvfile):\n",
    "        if row_index == 0:\n",
    "            keys = row\n",
    "            print(row)\n",
    "            continue\n",
    "        # if row_index % 10000 == 0:\n",
    "        #     print(row_index)\n",
    "        data.append({key: value for key, value in zip(keys, row)})\n",
    "    return data\n",
    "\n",
    "\n",
    "def set_nan_as_string(data, replace_str='0'):\n",
    "    for i, x in enumerate(data):\n",
    "        for key, value in x.items():\n",
    "            if value == '':\n",
    "                x[key] = replace_str\n",
    "        data[i] = x\n",
    "\n",
    "\n",
    "train_data = \"train.csv\"\n",
    "\n",
    "with open(train_data,encoding ='utf8') as csvfile:\n",
    "    data = csv.reader(csvfile, delimiter=',')\n",
    "    with open('train_data.pickle', 'wb') as f:\n",
    "        data = csv2dicts(data)\n",
    "        data = data[::-1]  # reverse the row order\n",
    "        pickle.dump(data, f, -1)\n",
    "        print(data[:3])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Feature engineering\n",
    "This part corresponds to prepare_features.py in the entity-embedding-rossmann program.  \n",
    "\n",
    "Its main job is to prepare the features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:49.561642Z",
     "iopub.status.busy": "2021-09-04T09:02:49.561227Z",
     "iopub.status.idle": "2021-09-04T09:02:49.600699Z",
     "shell.execute_reply": "2021-09-04T09:02:49.599497Z",
     "shell.execute_reply.started": "2021-09-04T09:02:49.561596Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of train datapoints:  87020\n",
      "0 1\n",
      "[   1  446 4738   49    1   24    2   24    0    0    0   58    2    4\n",
      "    8  170    4    7  318 1746    1    2    6    1    4    0] 0\n"
     ]
    }
   ],
   "source": [
    "import pickle\n",
    "from datetime import datetime\n",
    "from sklearn import preprocessing\n",
    "import numpy as np\n",
    "import random\n",
    "random.seed(42)\n",
    "\n",
    "with open('train_data.pickle', 'rb') as f:\n",
    "    train_data = pickle.load(f)\n",
    "    num_records = len(train_data)\n",
    "\n",
    "def feature_list(record):\n",
    "    ID=record['ID']\n",
    "    Gender=record['Gender']\n",
    "    City=record['City']\n",
    "    Monthly_Income=int(record['Monthly_Income'])\n",
    "    #DOB\n",
    "    dt = datetime.strptime(record['DOB'], '%Y-%m-%d')\n",
    "    DOB_year = dt.year\n",
    "    DOB_month = dt.month\n",
    "    DOB_day = dt.day\n",
    "    #Lead_Creation_Date\n",
    "    dt = datetime.strptime(record['Lead_Creation_Date'], '%Y-%m-%d')\n",
    "    LCD_year = dt.year\n",
    "    LCD_month = dt.month\n",
    "    LCD_day = dt.day\n",
    "\n",
    "    Loan_Amount_Applied=float(record['Loan_Amount_Applied'])\n",
    "    Loan_Tenure_Applied=float(record['Loan_Tenure_Applied'])\n",
    "    Existing_EMI=float(record['Existing_EMI'])\n",
    "    Employer_Name=record['Employer_Name']\n",
    "    Salary_Account=record['Salary_Account']\n",
    "    Mobile_Verified=record['Mobile_Verified']\n",
    "    Var5=record['Var5']\n",
    "    Var1=record['Var1']\n",
    "    Loan_Amount_Submitted=float(record['Loan_Amount_Submitted'])\n",
    "    Loan_Tenure_Submitted=float(record['Loan_Tenure_Submitted'])\n",
    "    Interest_Rate=float(record['Interest_Rate'])\n",
    "    Processing_Fee=float(record['Processing_Fee'])\n",
    "    EMI_Loan_Submitted=float(record['EMI_Loan_Submitted'])\n",
    "    Filled_Form=record['Filled_Form']\n",
    "    Device_Type=record['Device_Type']\n",
    "    Var2=record['Var2']\n",
    "    Source=record['Source']\n",
    "    Var4=int(record['Var4'])\n",
    "    LoggedIn=int(record['LoggedIn'])\n",
    "\n",
    "    return [#ID,\n",
    "            Gender,\n",
    "            City,\n",
    "            Monthly_Income,\n",
    "            DOB_year,DOB_month,DOB_day,\n",
    "            #LCD_year,\n",
    "            LCD_month,LCD_day,\n",
    "            Loan_Amount_Applied,\n",
    "            Loan_Tenure_Applied,\n",
    "            Existing_EMI,\n",
    "            #Employer_Name,\n",
    "            Salary_Account,\n",
    "            Mobile_Verified,\n",
    "            Var5,\n",
    "            Var1,\n",
    "            Loan_Amount_Submitted,\n",
    "            Loan_Tenure_Submitted,\n",
    "            Interest_Rate,\n",
    "            Processing_Fee,\n",
    "            EMI_Loan_Submitted,\n",
    "            Filled_Form,\n",
    "            Device_Type,\n",
    "            Var2,\n",
    "            Source,\n",
    "            Var4,\n",
    "            LoggedIn\n",
    "            ]\n",
    "\n",
    "train_data_X = []\n",
    "train_data_y = []\n",
    "\n",
    "for record in train_data:\n",
    "    fl = feature_list(record)\n",
    "    train_data_X.append(fl)\n",
    "    train_data_y.append(int(record['Disbursed']))\n",
    "print(\"Number of train datapoints: \", len(train_data_y))\n",
    "\n",
    "print(min(train_data_y), max(train_data_y))\n",
    "\n",
    "full_X = train_data_X\n",
    "full_X = np.array(full_X)\n",
    "train_data_X = np.array(train_data_X)\n",
    "les = []\n",
    "for i in range(train_data_X.shape[1]):\n",
    "    le = preprocessing.LabelEncoder()\n",
    "    le.fit(full_X[:, i])\n",
    "    les.append(le)\n",
    "    train_data_X[:, i] = le.transform(train_data_X[:, i])\n",
    "\n",
    "with open('les.pickle', 'wb') as f:\n",
    "    pickle.dump(les, f, -1)\n",
    "\n",
    "train_data_X = train_data_X.astype(int)\n",
    "train_data_y = np.array(train_data_y)\n",
    "\n",
    "with open('feature_train_data.pickle', 'wb') as f:\n",
    "    pickle.dump((train_data_X, train_data_y), f, -1)\n",
    "    print(train_data_X[0], train_data_y[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Modeling\n",
    "This part corresponds to model.py in the entity-embedding-rossmann program.  \n",
    "\n",
    "Its main job is to build the model.\n",
    "\n",
    "A few points deserve special attention:\n",
    "1. split_features was rewritten; the original was too tailored to the Rossmann data.\n",
    "2. __build_keras_model was rewritten for the same reason.\n",
    "3. The evaluate function now handles the case min(y_val) == 0, which is very likely here because this is a binary classification problem where 0 means no default. A small constant (1e-5) is added to avoid a division-by-zero error.\n",
    "4. _val_for_fit and _val_for_pred no longer apply log and inverse-log transforms. They are unnecessary here; the original presumably used them because very large values can occur in the regression setting."
   ]
  },
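  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Point 3 can be sketched in isolation (a hedged numpy illustration of the idea, not the exact code in model.py):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def relative_error(y_val, y_pred, eps=1e-5):\n",
    "    # Adding eps to the denominator keeps rows with y_val == 0\n",
    "    # (non-default, the majority class here) from dividing by zero.\n",
    "    y_val = np.asarray(y_val, dtype=float)\n",
    "    y_pred = np.asarray(y_pred, dtype=float)\n",
    "    return np.mean(np.abs(y_val - y_pred) / (y_val + eps))\n",
    "\n",
    "print(np.isfinite(relative_error([0, 1, 1], [0.1, 0.9, 1.0])))  # True\n",
    "```"
   ]
  },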
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:49.602626Z",
     "iopub.status.busy": "2021-09-04T09:02:49.602280Z",
     "iopub.status.idle": "2021-09-04T09:02:49.615607Z",
     "shell.execute_reply": "2021-09-04T09:02:49.614440Z",
     "shell.execute_reply.started": "2021-09-04T09:02:49.602594Z"
    }
   },
   "outputs": [],
   "source": [
    "import numpy\n",
    "numpy.random.seed(123)\n",
    "from sklearn import linear_model\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.svm import SVR\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "import xgboost as xgb\n",
    "from sklearn import neighbors\n",
    "from sklearn.preprocessing import Normalizer\n",
    "\n",
    "from keras.models import Sequential\n",
    "from keras.models import Model as KerasModel\n",
    "from keras.layers import Input, Dense, Activation, Reshape, Dropout\n",
    "from keras.layers import Concatenate\n",
    "from keras.layers.embeddings import Embedding\n",
    "from keras.callbacks import ModelCheckpoint\n",
    "\n",
    "import pickle\n",
    "\n",
    "\n",
    "def embed_features(X, saved_embeddings_fname):\n",
    "    # f_embeddings = open(\"embeddings_shuffled.pickle\", \"rb\")\n",
    "    f_embeddings = open(saved_embeddings_fname, \"rb\")\n",
    "    embeddings = pickle.load(f_embeddings)\n",
    "\n",
    "    index_embedding_mapping = {1: 0, 2: 1, 4: 2, 5: 3, 6: 4, 7: 5}\n",
    "    X_embedded = []\n",
    "\n",
    "    (num_records, num_features) = X.shape\n",
    "    for record in X:\n",
    "        embedded_features = []\n",
    "        for i, feat in enumerate(record):\n",
    "            feat = int(feat)\n",
    "            if i not in index_embedding_mapping.keys():\n",
    "                embedded_features += [feat]\n",
    "            else:\n",
    "                embedding_index = index_embedding_mapping[i]\n",
    "                embedded_features += embeddings[embedding_index][feat].tolist()\n",
    "\n",
    "        X_embedded.append(embedded_features)\n",
    "\n",
    "    return numpy.array(X_embedded)\n",
    "\n",
    "\n",
    "def split_features(X):\n",
    "    X_list = []\n",
    "    for i in range(X.shape[1]):\n",
    "        s = X[..., [i]]\n",
    "        X_list.append(s)\n",
    "    return X_list"
   ]
  },
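  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustrative check (not part of the original program): split_features turns an (n, k) matrix into k column arrays of shape (n, 1), one per Input of the multi-input Keras model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy\n",
    "\n",
    "X_demo = numpy.arange(6).reshape(3, 2)  # 3 records, 2 categorical features\n",
    "# same column-slicing logic as split_features above\n",
    "parts = [X_demo[..., [i]] for i in range(X_demo.shape[1])]\n",
    "print(len(parts), parts[0].shape)  # 2 (3, 1)"
   ]
  },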
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:49.617742Z",
     "iopub.status.busy": "2021-09-04T09:02:49.617307Z",
     "iopub.status.idle": "2021-09-04T09:02:49.632444Z",
     "shell.execute_reply": "2021-09-04T09:02:49.631528Z",
     "shell.execute_reply.started": "2021-09-04T09:02:49.617698Z"
    }
   },
   "outputs": [],
   "source": [
    "embed_cols=['Gender','City','Monthly_Income','DOB_year','DOB_month','DOB_day',\n",
    "            'LCD_month','LCD_day','Loan_Amount_Applied','Loan_Tenure_Applied',\n",
    "            'Existing_EMI','Salary_Account','Mobile_Verified','Var5','Var1',\n",
    "            'Loan_Amount_Submitted','Loan_Tenure_Submitted','Interest_Rate',\n",
    "            'Processing_Fee','EMI_Loan_Submitted','Filled_Form','Device_Type',\n",
    "            'Var2','Source','Var4','LoggedIn']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:49.833763Z",
     "iopub.status.busy": "2021-09-04T09:02:49.833300Z",
     "iopub.status.idle": "2021-09-04T09:02:49.919270Z",
     "shell.execute_reply": "2021-09-04T09:02:49.918227Z",
     "shell.execute_reply.started": "2021-09-04T09:02:49.833715Z"
    }
   },
   "outputs": [],
   "source": [
    "import pickle\n",
    "import pandas as pd\n",
    "f = open('feature_train_data.pickle', 'rb')\n",
    "(X, y) = pickle.load(f)\n",
    "\n",
    "df=pd.DataFrame(X,columns=embed_cols)\n",
    "\n",
    "col_vals_dict = {c: list(df[c].unique()) for c in embed_cols}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:49.921839Z",
     "iopub.status.busy": "2021-09-04T09:02:49.921372Z",
     "iopub.status.idle": "2021-09-04T09:02:49.943871Z",
     "shell.execute_reply": "2021-09-04T09:02:49.942756Z",
     "shell.execute_reply.started": "2021-09-04T09:02:49.921789Z"
    }
   },
   "outputs": [],
   "source": [
    "class Model(object):\n",
    "\n",
    "    def evaluate(self, X_val, y_val):\n",
    "        #assert(min(y_val) > 0)\n",
    "        y_val1 = y_val + 1e-5  # epsilon guard: the binary target contains zeros\n",
    "        guessed_sales = self.guess(X_val)\n",
    "        relative_err = numpy.absolute((y_val - guessed_sales) / y_val1)\n",
    "        result = numpy.sum(relative_err) / len(y_val)\n",
    "        return result\n",
    "\n",
    "class NN_with_EntityEmbedding(Model):\n",
    "\n",
    "    def __init__(self, X_train, y_train, X_val, y_val):\n",
    "        super().__init__()\n",
    "        self.epochs = 10\n",
    "        self.checkpointer = ModelCheckpoint(filepath=\"best_model_weights.hdf5\", verbose=1, save_best_only=True)\n",
    "        # max_log_y is kept from the original script but unused here (the log\n",
    "        # transform is disabled below); numpy.log warns because y contains zeros\n",
    "        self.max_log_y = max(numpy.max(numpy.log(y_train)), numpy.max(numpy.log(y_val)))\n",
    "        self.__build_keras_model()\n",
    "        self.fit(X_train, y_train, X_val, y_val)\n",
    "\n",
    "    def preprocessing(self, X):\n",
    "        X_list = split_features(X)\n",
    "        return X_list\n",
    "    \n",
    "    def build_embedding_network(self):\n",
    "        inputs = []\n",
    "        embeddings = []\n",
    "        for i in range(len(embed_cols)):\n",
    "            cate_input = Input(shape=(1,))\n",
    "            input_dim = len(col_vals_dict[embed_cols[i]])\n",
    "            if input_dim > 1000:\n",
    "                output_dim = 50\n",
    "            else:\n",
    "                output_dim = (len(col_vals_dict[embed_cols[i]]) // 2) + 1\n",
    "\n",
    "            embedding = Embedding(input_dim, output_dim, input_length=1,name=embed_cols[i])(cate_input)\n",
    "            embedding = Reshape(target_shape=(output_dim,))(embedding)\n",
    "            inputs.append(cate_input)\n",
    "            embeddings.append(embedding)\n",
    "\n",
    "        #input_numeric = Input(shape=(4,))\n",
    "        #embedding_numeric = Dense(5)(input_numeric)\n",
    "        #inputs.append(input_numeric)\n",
    "        #embeddings.append(embedding_numeric)\n",
    "\n",
    "        x = Concatenate()(embeddings)\n",
    "        x = Dense(1000, activation='relu')(x)\n",
    "        x = Dropout(.35)(x)\n",
    "        x = Dense(500, activation='relu')(x)\n",
    "        x = Dropout(.15)(x)\n",
    "        output = Dense(1, activation='sigmoid')(x)\n",
    "\n",
    "        self.model = KerasModel(inputs, output)\n",
    "        self.model.compile(loss='mean_absolute_error', optimizer='adam')\n",
    "\n",
    "        #self.model.compile(loss='binary_crossentropy', optimizer='rmsprop')\n",
    "\n",
    "        #return model\n",
    "\n",
    "\n",
    "    def __build_keras_model(self):\n",
    "        \n",
    "        self.build_embedding_network()\n",
    "        return\n",
    "\n",
    "    def _val_for_fit(self, val):\n",
    "        #val = numpy.log(val) / self.max_log_y\n",
    "        return val\n",
    "\n",
    "    def _val_for_pred(self, val):\n",
    "        return val#numpy.exp(val * self.max_log_y)\n",
    "\n",
    "    def fit(self, X_train, y_train, X_val, y_val):\n",
    "        self.model.fit(self.preprocessing(X_train), self._val_for_fit(y_train),\n",
    "                       validation_data=(self.preprocessing(X_val), self._val_for_fit(y_val)),\n",
    "                       epochs=self.epochs, batch_size=128,\n",
    "                       # callbacks=[self.checkpointer],\n",
    "                       )\n",
    "        # self.model.load_weights('best_model_weights.hdf5')\n",
    "        print(\"Result on validation data: \", self.evaluate(X_val, y_val))\n",
    "\n",
    "    def guess(self, features):\n",
    "        features = self.preprocessing(features)\n",
    "        result = self.model.predict(features).flatten()\n",
    "        return self._val_for_pred(result)"
   ]
  },
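  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The embedding-width heuristic inside build_embedding_network can be isolated as follows (illustration only; embedding_dim is a hypothetical helper name):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def embedding_dim(cardinality):\n",
    "    # cap the width at 50 for very large vocabularies,\n",
    "    # otherwise use half the cardinality plus one\n",
    "    return 50 if cardinality > 1000 else cardinality // 2 + 1\n",
    "\n",
    "print(embedding_dim(3), embedding_dim(7), embedding_dim(5000))  # 2 4 50"
   ]
  },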
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Training and testing\n",
    "This part corresponds to train_test_model.py in the entity-embedding-rossmann program.  \n",
    "\n",
    "Its main purpose is to train the models and evaluate the results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:02:49.996554Z",
     "iopub.status.busy": "2021-09-04T09:02:49.995987Z",
     "iopub.status.idle": "2021-09-04T09:18:33.665192Z",
     "shell.execute_reply": "2021-09-04T09:18:33.664093Z",
     "shell.execute_reply.started": "2021-09-04T09:02:49.996507Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of samples used for training: 78318\n",
      "Fitting NN_with_EntityEmbedding...\n",
      "The turn  0\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<ipython-input-10-695e4f9ed55e>:17: RuntimeWarning: divide by zero encountered in log\n",
      "  self.max_log_y = max(numpy.max(numpy.log(y_train)), numpy.max(numpy.log(y_val)))\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/10\n",
      "612/612 [==============================] - 76s 70ms/step - loss: 0.0301 - val_loss: 0.0167\n",
      "Epoch 2/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0157 - val_loss: 0.0167\n",
      "Epoch 3/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0151 - val_loss: 0.0167\n",
      "Epoch 4/10\n",
      "612/612 [==============================] - 42s 68ms/step - loss: 0.0154 - val_loss: 0.0167\n",
      "Epoch 5/10\n",
      "612/612 [==============================] - 44s 71ms/step - loss: 0.0149 - val_loss: 0.0167\n",
      "Epoch 6/10\n",
      "612/612 [==============================] - 45s 74ms/step - loss: 0.0151 - val_loss: 0.0167\n",
      "Epoch 7/10\n",
      "612/612 [==============================] - 45s 74ms/step - loss: 0.0150 - val_loss: 0.0167\n",
      "Epoch 8/10\n",
      "612/612 [==============================] - 43s 70ms/step - loss: 0.0154 - val_loss: 0.0167\n",
      "Epoch 9/10\n",
      "612/612 [==============================] - 42s 69ms/step - loss: 0.0148 - val_loss: 0.0167\n",
      "Epoch 10/10\n",
      "612/612 [==============================] - 42s 68ms/step - loss: 0.0161 - val_loss: 0.0167\n",
      "Result on validation data:  0.017151090891937004\n",
      "The turn  1\n",
      "Epoch 1/10\n",
      "612/612 [==============================] - 52s 71ms/step - loss: 0.0292 - val_loss: 0.0167\n",
      "Epoch 2/10\n",
      "612/612 [==============================] - 40s 66ms/step - loss: 0.0151 - val_loss: 0.0167\n",
      "Epoch 3/10\n",
      "612/612 [==============================] - 41s 68ms/step - loss: 0.0145 - val_loss: 0.0167\n",
      "Epoch 4/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0159 - val_loss: 0.0167\n",
      "Epoch 5/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0150 - val_loss: 0.0167\n",
      "Epoch 6/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0156 - val_loss: 0.0167\n",
      "Epoch 7/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0153 - val_loss: 0.0167\n",
      "Epoch 8/10\n",
      "612/612 [==============================] - 42s 68ms/step - loss: 0.0153 - val_loss: 0.0167\n",
      "Epoch 9/10\n",
      "612/612 [==============================] - 42s 68ms/step - loss: 0.0153 - val_loss: 0.0167\n",
      "Epoch 10/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0155 - val_loss: 0.0167\n",
      "Result on validation data:  0.017856893118558908\n",
      "The turn  2\n",
      "Epoch 1/10\n",
      "612/612 [==============================] - 51s 70ms/step - loss: 0.0301 - val_loss: 0.0167\n",
      "Epoch 2/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0153 - val_loss: 0.0167\n",
      "Epoch 3/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0149 - val_loss: 0.0167\n",
      "Epoch 4/10\n",
      "612/612 [==============================] - 40s 65ms/step - loss: 0.0152 - val_loss: 0.0167\n",
      "Epoch 5/10\n",
      "612/612 [==============================] - 41s 67ms/step - loss: 0.0148 - val_loss: 0.0167\n",
      "Epoch 6/10\n",
      "612/612 [==============================] - 42s 68ms/step - loss: 0.0149 - val_loss: 0.0167\n",
      "Epoch 7/10\n",
      "612/612 [==============================] - 43s 70ms/step - loss: 0.0158 - val_loss: 0.0167\n",
      "Epoch 8/10\n",
      "612/612 [==============================] - 42s 68ms/step - loss: 0.0151 - val_loss: 0.0167\n",
      "Epoch 9/10\n",
      "612/612 [==============================] - 42s 68ms/step - loss: 0.0150 - val_loss: 0.0167\n",
      "Epoch 10/10\n",
      "612/612 [==============================] - 41s 66ms/step - loss: 0.0153 - val_loss: 0.0167\n",
      "Result on validation data:  0.01810633511853432\n",
      "The turn  3\n",
      "Epoch 1/10\n",
      "612/612 [==============================] - 54s 74ms/step - loss: 0.0300 - val_loss: 0.0167\n",
      "Epoch 2/10\n",
      "612/612 [==============================] - 48s 78ms/step - loss: 0.0155 - val_loss: 0.0167\n",
      "Epoch 3/10\n",
      "612/612 [==============================] - 54s 88ms/step - loss: 0.0152 - val_loss: 0.0167\n",
      "Epoch 4/10\n",
      "612/612 [==============================] - 52s 85ms/step - loss: 0.0160 - val_loss: 0.0167\n",
      "Epoch 5/10\n",
      "612/612 [==============================] - 49s 80ms/step - loss: 0.0159 - val_loss: 0.0167\n",
      "Epoch 6/10\n",
      "612/612 [==============================] - 43s 71ms/step - loss: 0.0152 - val_loss: 0.0167\n",
      "Epoch 7/10\n",
      "612/612 [==============================] - 44s 71ms/step - loss: 0.0157 - val_loss: 0.0167\n",
      "Epoch 8/10\n",
      "612/612 [==============================] - 42s 69ms/step - loss: 0.0153 - val_loss: 0.0167\n",
      "Epoch 9/10\n",
      "612/612 [==============================] - 42s 69ms/step - loss: 0.0154 - val_loss: 0.0167\n",
      "Epoch 10/10\n",
      "612/612 [==============================] - 43s 70ms/step - loss: 0.0152 - val_loss: 0.0167\n",
      "Result on validation data:  0.016900734703634547\n",
      "The turn  4\n",
      "Epoch 1/10\n",
      "612/612 [==============================] - 59s 78ms/step - loss: 0.0304 - val_loss: 0.0167\n",
      "Epoch 2/10\n",
      "612/612 [==============================] - 44s 71ms/step - loss: 0.0154 - val_loss: 0.0167\n",
      "Epoch 3/10\n",
      "612/612 [==============================] - 47s 77ms/step - loss: 0.0147 - val_loss: 0.0167\n",
      "Epoch 4/10\n",
      "612/612 [==============================] - 47s 77ms/step - loss: 0.0156 - val_loss: 0.0167\n",
      "Epoch 5/10\n",
      "612/612 [==============================] - 46s 75ms/step - loss: 0.0156 - val_loss: 0.0167\n",
      "Epoch 6/10\n",
      "612/612 [==============================] - 42s 69ms/step - loss: 0.0161 - val_loss: 0.0167\n",
      "Epoch 7/10\n",
      "612/612 [==============================] - 44s 73ms/step - loss: 0.0152 - val_loss: 0.0167\n",
      "Epoch 8/10\n",
      "612/612 [==============================] - 47s 76ms/step - loss: 0.0152 - val_loss: 0.0167\n",
      "Epoch 9/10\n",
      "612/612 [==============================] - 45s 73ms/step - loss: 0.0151 - val_loss: 0.0167\n",
      "Epoch 10/10\n",
      "612/612 [==============================] - 44s 72ms/step - loss: 0.0156 - val_loss: 0.0167\n",
      "Result on validation data:  0.016830779131364\n",
      "Finish training!\n"
     ]
    }
   ],
   "source": [
    "import pickle\n",
    "import numpy\n",
    "numpy.random.seed(123)\n",
    "#from models import *\n",
    "from sklearn.preprocessing import OneHotEncoder\n",
    "import sys\n",
    "sys.setrecursionlimit(10000)\n",
    "\n",
    "train_ratio = 0.9\n",
    "shuffle_data = False\n",
    "one_hot_as_input = False\n",
    "embeddings_as_input = False\n",
    "save_embeddings = True\n",
    "saved_embeddings_fname = \"embeddings.pickle\"  # set save_embeddings to True to create this file\n",
    "\n",
    "f = open('feature_train_data.pickle', 'rb')\n",
    "(X, y) = pickle.load(f)\n",
    "\n",
    "num_records = len(X)\n",
    "train_size = int(train_ratio * num_records)\n",
    "\n",
    "if shuffle_data:\n",
    "    print(\"Using shuffled data\")\n",
    "    sh = numpy.arange(X.shape[0])\n",
    "    numpy.random.shuffle(sh)\n",
    "    X = X[sh]\n",
    "    y = y[sh]\n",
    "\n",
    "if embeddings_as_input:\n",
    "    print(\"Using learned embeddings as input\")\n",
    "    X = embed_features(X, saved_embeddings_fname)\n",
    "\n",
    "if one_hot_as_input:\n",
    "    print(\"Using one-hot encoding as input\")\n",
    "    enc = OneHotEncoder(sparse=False)\n",
    "    enc.fit(X)\n",
    "    X = enc.transform(X)\n",
    "\n",
    "X_train = X[:train_size]\n",
    "X_val = X[train_size:]\n",
    "y_train = y[:train_size]\n",
    "y_val = y[train_size:]\n",
    "\n",
    "\n",
    "def sample(X, y, n):\n",
    "    '''random samples'''\n",
    "    num_row = X.shape[0]\n",
    "    indices = numpy.random.randint(num_row, size=n)\n",
    "    return X[indices, :], y[indices]\n",
    "\n",
    "\n",
    "X_train, y_train = sample(X_train, y_train, X_train.shape[0])  # Simulate data sparsity\n",
    "print(\"Number of samples used for training: \" + str(y_train.shape[0]))\n",
    "\n",
    "models = []\n",
    "\n",
    "print(\"Fitting NN_with_EntityEmbedding...\")\n",
    "for i in range(5):\n",
    "    print('The turn ',i)\n",
    "    models.append(NN_with_EntityEmbedding(X_train, y_train, X_val, y_val))\n",
    "\n",
    "# print(\"Fitting NN...\")\n",
    "# for i in range(5):\n",
    "#     models.append(NN(X_train, y_train, X_val, y_val))\n",
    "\n",
    "# print(\"Fitting RF...\")\n",
    "# models.append(RF(X_train, y_train, X_val, y_val))\n",
    "\n",
    "# print(\"Fitting KNN...\")\n",
    "# models.append(KNN(X_train, y_train, X_val, y_val))\n",
    "\n",
    "# print(\"Fitting XGBoost...\")\n",
    "# models.append(XGBoost(X_train, y_train, X_val, y_val))\n",
    "print('Finish training!')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-09-04T09:18:33.677477Z",
     "iopub.status.busy": "2021-09-04T09:18:33.677121Z",
     "iopub.status.idle": "2021-09-04T09:19:32.041819Z",
     "shell.execute_reply": "2021-09-04T09:19:32.040686Z",
     "shell.execute_reply.started": "2021-09-04T09:18:33.677444Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Evaluate combined models...\n",
      "Training error...\n",
      "0.015253042365235385\n",
      "Validation error...\n",
      "0.017369166578764635\n"
     ]
    }
   ],
   "source": [
    "from sklearn.metrics import accuracy_score,precision_score\n",
    "\n",
    "def evaluate_models(models, X, y):\n",
    "    #assert(min(y) > 0)\n",
    "    y1 = y + 1e-5  # epsilon guard, as in Model.evaluate\n",
    "    guessed_sales = numpy.array([model.guess(X) for model in models])\n",
    "    mean_sales = guessed_sales.mean(axis=0)\n",
    "    #print('y shape',y.shape)\n",
    "    #print('y_pred shape',mean_sales.shape)\n",
    "    #print('accuracy: ',accuracy_score(y,mean_sales))\n",
    "    #print('precision: ',precision_score(y,mean_sales))\n",
    "    relative_err = numpy.absolute((y - mean_sales) / y1)\n",
    "    result = numpy.sum(relative_err) / len(y)\n",
    "    return result\n",
    "\n",
    "print(\"Evaluate combined models...\")\n",
    "print(\"Training error...\")\n",
    "r_train = evaluate_models(models, X_train, y_train)\n",
    "print(r_train)\n",
    "\n",
    "print(\"Validation error...\")\n",
    "r_val = evaluate_models(models, X_val, y_val)\n",
    "print(r_val)"
   ]
  },
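  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ensemble in evaluate_models is a plain average over the five networks' predictions; the averaging step in isolation (a standalone sketch, not part of the original program):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy\n",
    "\n",
    "# each row holds one model's predictions for the same samples\n",
    "guesses = numpy.array([[0.2, 0.8, 0.1],\n",
    "                       [0.4, 0.6, 0.3]])\n",
    "mean_pred = guesses.mean(axis=0)  # average over the models (axis 0)\n",
    "print(mean_pred)  # [0.3 0.7 0.2]"
   ]
  },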
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
