{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction to ESMM\n",
    "\n",
    "The Entire Space Multi-task Model (ESMM) [1] is a multi-task joint-training paradigm developed by Alimama's targeted advertising algorithm team.\n",
    "\n",
    "Accurately estimating the post-click conversion rate (CVR) is critical in industrial applications such as information retrieval, recommender systems, and online advertising. For example, in an e-commerce recommender system, maximizing gross merchandise volume (GMV) is one of the platform's key objectives, and GMV decomposes into traffic × CTR × CVR × average order value, so CVR is a key factor in the objective. From the user-experience perspective, an accurately estimated CVR is also used to balance users' click preferences against their purchase preferences.\n",
    "\n",
    "Conventional CVR estimation usually borrows techniques from CTR estimation, such as the recently popular deep learning models. Unlike CTR estimation, however, CVR estimation faces several distinctive challenges: 1) sample selection bias; 2) training data sparsity; 3) delayed feedback.\n",
    "\n",
    "ESMM exploits the sequential pattern of user actions to model over the entire sample space, avoiding the sample selection bias and training data sparsity problems that conventional CVR models routinely suffer from, and achieves significant improvements. ESMM was also the first to propose learning CVR indirectly through the auxiliary CTR and CTCVR tasks. The BASE sub-network inside ESMM can be replaced by any learning model, so the ESMM framework integrates easily with other models and can absorb their strengths to further improve results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## About This Document\n",
    "This document introduces ESMM and explains how to use the ESMM open-source project in real production work. After reading it, you will understand:\n",
    "\n",
    "* The basic components of an ESMM system\n",
    "* How to run and use the ESMM open-source code\n",
    "* How to apply ESMM in practice\n",
    "\n",
    "Due to space and scope constraints, the following topics are not covered here; please refer to the relevant documentation:\n",
    "* Downloading, using, and licensing the public dataset\n",
    "    * Ali-CCP: Alibaba Click and Conversion Prediction, see: [https://tianchi.aliyun.com/datalab/dataSet.html?dataId=408](https://tianchi.aliyun.com/datalab/dataSet.html?dataId=408)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Problems ESMM Is Suited To\n",
    "\n",
    "\n",
    "<img src=\"assets/impression_click_buy.png\"/>\n",
    "\n",
    "\n",
    "ESMM fully exploits the sequential pattern of user actions and, with the help of the two auxiliary tasks CTR and CTCVR, elegantly solves the sample selection bias ($\\textbf{SSB}$) and data sparsity ($\\textbf{DS}$) challenges encountered in practical CVR modeling. ESMM generalizes readily to estimating any user actions with sequential dependence (browse, click, add-to-cart, purchase, etc.), enabling cross-domain, multi-scenario, full-funnel estimation models.\n",
    "\n",
    "\n",
    "\n",
    "<img src=\"assets/system_overview.png\"/>\n",
    "\n",
    "In an advertising or recommender system, the user-action pipeline can be written as the sequence $\\text{recall} \\rightarrow \\text{coarse ranking} \\rightarrow \\text{fine ranking} \\rightarrow \\text{impression} \\rightarrow \\text{click} \\rightarrow \\text{conversion} \\rightarrow \\text{repurchase}$. At request time we usually run multi-stage ranking, repeatedly passing the head subset of candidates to the next stage, until the impression stage returns results to the user. The input volume of each stage shrinks progressively, either through system filtering (e.g. recall to coarse ranking, coarse to fine ranking, fine ranking to impression) or through user filtering (e.g. impression to click, click to conversion, conversion to repurchase). ESMM is suited to mature full-funnel estimation systems for e-commerce recommendation or advertising. If you run into any difficulty applying ESMM in practice, feel free to contact us: maxiao.mx@alibaba-inc.com"
   ]
  },
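  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the cascaded funnel described above: each stage scores the candidates it receives and passes only the head subset on. Stage names and sizes here are illustrative, and the random scorer is a stand-in for real ranking models:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "random.seed(42)\n",
    "candidates = list(range(10000))        # recalled item ids\n",
    "\n",
    "def select_top(items, keep):\n",
    "    # Stand-in scorer: a real system ranks by model scores.\n",
    "    ranked = sorted(items, key=lambda i: random.random())\n",
    "    return ranked[:keep]\n",
    "\n",
    "coarse = select_top(candidates, 1000)  # recall -> coarse ranking\n",
    "fine = select_top(coarse, 100)         # coarse -> fine ranking\n",
    "shown = select_top(fine, 10)           # fine ranking -> impression\n",
    "\n",
    "assert len(shown) == 10 and set(shown) <= set(candidates)\n",
    "```"
   ]
  },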
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# The ESMM Framework\n",
    "\n",
    "## Algorithm\n",
    "\n",
    "ESMM introduces two auxiliary tasks: estimating the post-view click-through rate (CTR) and the post-view click-through & conversion rate (CTCVR). Rather than training a CVR model directly on the biased subset of clicked samples, ESMM treats pCVR as an intermediate variable and multiplies it by pCTR to obtain pCTCVR. Both pCTCVR and pCTR are estimated over the entire space of all impression samples, so the derived pCVR also applies to the entire space, which mitigates the $\\textbf{SSB}$ problem. In addition, the CVR task shares its feature representation network with the CTR task, and the latter is trained on far richer samples. This parameter sharing follows a feature-representation transfer-learning paradigm and provides significant help in mitigating the $\\textbf{DS}$ problem."
   ]
  },
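  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The factorization above can be sketched in a few lines of NumPy. This is a toy illustration with fixed stand-in probabilities, not the project's actual model code:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Stand-in tower outputs for 4 impressions (in ESMM these come from\n",
    "# the CTR and CVR sub-networks).\n",
    "pctr = np.array([0.10, 0.30, 0.05, 0.20])\n",
    "pcvr = np.array([0.02, 0.10, 0.50, 0.01])  # intermediate variable\n",
    "\n",
    "# pCTCVR = pCTR * pCVR is the quantity supervised over ALL impressions,\n",
    "# so pCVR is learned implicitly over the entire space.\n",
    "pctcvr = pctr * pcvr\n",
    "\n",
    "assert np.all(pctcvr <= pctr)  # P(click & buy) never exceeds P(click)\n",
    "```"
   ]
  },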
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Transfer Learning\n",
    "As described for the BASE model, the embedding layer maps large-scale sparse inputs to low-dimensional dense vectors. It accounts for most of the parameters of the deep network and needs a large number of samples to train. In ESMM, the embedding parameters of the CVR network are shared with the CTR task, following the feature-representation transfer-learning paradigm. The CTR task's sample size, covering all impressions, is orders of magnitude larger than the CVR task's. This parameter-sharing mechanism lets the CVR network in ESMM learn from unclicked impressions as well, alleviating the data sparsity problem.\n"
   ]
  },
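  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sharing mechanism can be sketched as a single lookup table read by both towers. The sizes and ids below are toy values, not the real feature space:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# One embedding table shared by the CTR and CVR towers.\n",
    "vocab_size, emb_dim = 1000, 8\n",
    "shared_embedding = rng.normal(size=(vocab_size, emb_dim))\n",
    "\n",
    "# A batch of sparse feature ids (e.g. hashed ItemID values).\n",
    "ids = np.array([3, 17, 256])\n",
    "\n",
    "# Both towers read the SAME rows, so gradients from the data-rich CTR\n",
    "# task also shape the representation the CVR tower consumes.\n",
    "ctr_input = shared_embedding[ids]\n",
    "cvr_input = shared_embedding[ids]\n",
    "\n",
    "assert np.array_equal(ctr_input, cvr_input)\n",
    "```"
   ]
  },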
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Entire-Space Modeling\n",
    "pCTR and pCTCVR are the variables ESMM actually estimates over the entire space. The multiplicative form lets the three associated, jointly trained classifiers exploit the sequential pattern of the data and exchange information with each other during training. ESMM's loss function is given below; it consists of two loss terms from the CTR and CTCVR tasks, both computed over all impression samples.\n",
    "\n",
    "\\begin{equation}\n",
    "L(\\theta_{cvr}, \\theta_{ctr}) = \\sum_{i=1}^N l\\big(y_i, f(\\textbf{x}_i;\\theta_{ctr})\\big) + \\sum_{i=1}^N l\\big(y_i \\& z_i, f(\\textbf{x}_i;\\theta_{ctr}) \\times f(\\textbf{x}_i;\\theta_{cvr})\\big)\n",
    "\\end{equation}\n",
    "\n",
    "where $\\theta_{ctr}$ and $\\theta_{cvr}$ are the parameters of the CTR and CVR networks and $l(\\cdot)$ is the cross-entropy loss.\n",
    "Mathematically, the equation above decomposes $y \\rightarrow z$ into two parts corresponding to the labels of the CTR and CTCVR tasks, and the training sets are constructed as follows:\n",
    "for the CTR task, clicked impressions are labeled $y = 1$ and otherwise $y = 0$; for the CTCVR task, impressions with both a click and a conversion are labeled $y \\& z = 1$ and otherwise $y \\& z = 0$. Using $y$ and $y \\& z$ in this way exploits the sequential dependence between the click and conversion labels.\n"
   ]
  },
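  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two-term loss can be checked numerically. The labels and tower outputs below are synthetic stand-ins, and `bce` is an assumed helper, not part of the ESMM codebase:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def bce(y, p, eps=1e-7):\n",
    "    # Binary cross-entropy, clipped for numerical safety.\n",
    "    p = np.clip(p, eps, 1 - eps)\n",
    "    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))\n",
    "\n",
    "# Labels over ALL impressions: y = click, z = conversion.\n",
    "y = np.array([0, 1, 1, 0])\n",
    "z = np.array([0, 0, 1, 0])\n",
    "y_and_z = y * z               # CTCVR label: click AND conversion\n",
    "\n",
    "# Stand-in tower outputs.\n",
    "pctr = np.array([0.1, 0.8, 0.7, 0.2])\n",
    "pcvr = np.array([0.3, 0.1, 0.6, 0.4])\n",
    "\n",
    "# L = CTR loss + CTCVR loss; there is no direct CVR loss term.\n",
    "loss = bce(y, pctr) + bce(y_and_z, pctr * pcvr)\n",
    "assert loss > 0\n",
    "```"
   ]
  },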
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Structural Extensibility\n",
    "\n",
    "ESMM consists mainly of two sub-networks: the CVR network on the left of the figure and the CTR network on the right. Both adopt the same structure as the BASE model. CTCVR takes the product of the CVR and CTR network outputs as its output. Each sub-network can be replaced by any classification network."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ESMM Training Example"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Processing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download the Dataset\n",
    "* Downloading, using, and licensing the public dataset\n",
    "    * Ali-CCP: Alibaba Click and Conversion Prediction, see: [https://tianchi.aliyun.com/datalab/dataSet.html?dataId=408](https://tianchi.aliyun.com/datalab/dataSet.html?dataId=408)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "from sklearn.model_selection import train_test_split\n",
    "import numpy as np\n",
    "from collections import Counter\n",
    "import tensorflow as tf\n",
    "\n",
    "import os\n",
    "import pickle\n",
    "import re\n",
    "from tensorflow.python.ops import math_ops"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A First Look at the Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### After Steps 1 & 2: a 2.5% sample of the data, with 1,000,000 rows each for training and test\n",
    "\n",
    "    esmm_public/ctr_cvr_data$ ll\n",
    "    -rw-r--r--  1 maxiao  staff   95592738 Dec  4 08:56 sampled_common_features_skeleton_test_sample_feature_column.csv\n",
    "    -rw-r--r--  1 maxiao  staff   74272695 Dec  4 08:53 sampled_common_features_skeleton_train_sample_feature_column.csv\n",
    "    -rw-r--r--  1 maxiao  staff  165569695 Dec  4 08:53 sampled_sample_skeleton_test_sample_feature_column.csv\n",
    "    -rw-r--r--  1 maxiao  staff  164016605 Dec  4 08:51 sampled_sample_skeleton_train_sample_feature_column.csv"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sample Skeleton Data Structure\n",
    "<img src=\"assets/sample_skeleton.jpg\"/>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>sample_id</th>\n",
       "      <th>click</th>\n",
       "      <th>buy</th>\n",
       "      <th>md5</th>\n",
       "      <th>feature_num</th>\n",
       "      <th>ItemID</th>\n",
       "      <th>CategoryID</th>\n",
       "      <th>ShopID</th>\n",
       "      <th>NodeID</th>\n",
       "      <th>BrandID</th>\n",
       "      <th>Com_CateID</th>\n",
       "      <th>Com_ShopID</th>\n",
       "      <th>Com_BrandID</th>\n",
       "      <th>Com_NodeID</th>\n",
       "      <th>PID</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>20</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>15</td>\n",
       "      <td>6986709</td>\n",
       "      <td>8316590</td>\n",
       "      <td>8621426</td>\n",
       "      <td>9107695|9075968|9052327|9032767|9074649|9091748</td>\n",
       "      <td>9348026</td>\n",
       "      <td>9354837</td>\n",
       "      <td>9565193</td>\n",
       "      <td>9997471</td>\n",
       "      <td>10083008</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>124</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>11</td>\n",
       "      <td>5290285</td>\n",
       "      <td>8316589</td>\n",
       "      <td>8801026</td>\n",
       "      <td>9092320|9093422|9105534</td>\n",
       "      <td>9174649</td>\n",
       "      <td>9354836</td>\n",
       "      <td>9686171</td>\n",
       "      <td>9874111</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>155</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>16</td>\n",
       "      <td>7719639</td>\n",
       "      <td>8315276</td>\n",
       "      <td>8700437</td>\n",
       "      <td>9060166|9067562|9067906|9024508|9056445|903715...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9353608</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>10021801</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>194</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>11</td>\n",
       "      <td>8239102</td>\n",
       "      <td>8315277</td>\n",
       "      <td>8731751</td>\n",
       "      <td>9055739|9114104|9113741</td>\n",
       "      <td>9181078</td>\n",
       "      <td>9353609</td>\n",
       "      <td>9639217</td>\n",
       "      <td>9878755</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>197</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>10</td>\n",
       "      <td>4505957</td>\n",
       "      <td>8316758</td>\n",
       "      <td>8525993</td>\n",
       "      <td>9042317|9077687|9020906|9074519|9061381|9038129</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   sample_id  click  buy               md5  feature_num   ItemID CategoryID  \\\n",
       "0         20      0    0  bacff91692951881           15  6986709    8316590   \n",
       "1        124      1    0  bacff91692951881           11  5290285    8316589   \n",
       "2        155      0    0  bacff91692951881           16  7719639    8315276   \n",
       "3        194      0    0  bacff91692951881           11  8239102    8315277   \n",
       "4        197      0    0  bacff91692951881           10  4505957    8316758   \n",
       "\n",
       "    ShopID                                             NodeID  BrandID  \\\n",
       "0  8621426    9107695|9075968|9052327|9032767|9074649|9091748  9348026   \n",
       "1  8801026                            9092320|9093422|9105534  9174649   \n",
       "2  8700437  9060166|9067562|9067906|9024508|9056445|903715...    <PAD>   \n",
       "3  8731751                            9055739|9114104|9113741  9181078   \n",
       "4  8525993    9042317|9077687|9020906|9074519|9061381|9038129    <PAD>   \n",
       "\n",
       "  Com_CateID Com_ShopID Com_BrandID Com_NodeID      PID  \n",
       "0    9354837    9565193     9997471   10083008  9351665  \n",
       "1    9354836    9686171     9874111      <PAD>  9351665  \n",
       "2    9353608      <PAD>       <PAD>   10021801  9351665  \n",
       "3    9353609    9639217     9878755      <PAD>  9351665  \n",
       "4      <PAD>      <PAD>       <PAD>      <PAD>  9351665  "
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sample_feature_columns = ['sample_id', 'click', 'buy', 'md5', 'feature_num', 'ItemID', 'CategoryID', 'ShopID', 'NodeID', 'BrandID', 'Com_CateID',\n",
    "                          'Com_ShopID', 'Com_BrandID', 'Com_NodeID', 'PID']\n",
    "# read_table is deprecated; read_csv is the supported equivalent.\n",
    "train_sample_table = pd.read_csv('./ctr_cvr_data/BuyWeight_sampled_sample_skeleton_train_sample_feature_column.csv',\n",
    "                                 dtype={'ItemID': object, 'CategoryID': object, 'ShopID': object, 'PID': object},\n",
    "                                 header=0)\n",
    "train_sample_table.head()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>sample_id</th>\n",
       "      <th>click</th>\n",
       "      <th>buy</th>\n",
       "      <th>md5</th>\n",
       "      <th>feature_num</th>\n",
       "      <th>ItemID</th>\n",
       "      <th>CategoryID</th>\n",
       "      <th>ShopID</th>\n",
       "      <th>NodeID</th>\n",
       "      <th>BrandID</th>\n",
       "      <th>Com_CateID</th>\n",
       "      <th>Com_ShopID</th>\n",
       "      <th>Com_BrandID</th>\n",
       "      <th>Com_NodeID</th>\n",
       "      <th>PID</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>20</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>543b0cd53c7d5858</td>\n",
       "      <td>8</td>\n",
       "      <td>5802912</td>\n",
       "      <td>8316509</td>\n",
       "      <td>8814075</td>\n",
       "      <td>9044230|9078066|9072583</td>\n",
       "      <td>9286972</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>37</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>543b0cd53c7d5858</td>\n",
       "      <td>11</td>\n",
       "      <td>6628259</td>\n",
       "      <td>8315279</td>\n",
       "      <td>8746722</td>\n",
       "      <td>9103580|9109960|9113454|9113276|9029850</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9353611</td>\n",
       "      <td>9649339</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>49</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>543b0cd53c7d5858</td>\n",
       "      <td>11</td>\n",
       "      <td>6972165</td>\n",
       "      <td>8317356</td>\n",
       "      <td>8331019</td>\n",
       "      <td>9072169|9078426|9073266|9056371|9049913|910983...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>139</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>a2ea4295d36bc432</td>\n",
       "      <td>13</td>\n",
       "      <td>7772257</td>\n",
       "      <td>8316875</td>\n",
       "      <td>8424009</td>\n",
       "      <td>9039527|9018404|9026925|9107731|9031472|906277...</td>\n",
       "      <td>9279217</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>188</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>4b7c891bbf454188</td>\n",
       "      <td>8</td>\n",
       "      <td>5322018</td>\n",
       "      <td>8317527</td>\n",
       "      <td>8899037</td>\n",
       "      <td>9095770|9063374|9105026</td>\n",
       "      <td>9248573</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351666</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   sample_id  click  buy               md5  feature_num   ItemID CategoryID  \\\n",
       "0         20      0    0  543b0cd53c7d5858            8  5802912    8316509   \n",
       "1         37      0    0  543b0cd53c7d5858           11  6628259    8315279   \n",
       "2         49      0    0  543b0cd53c7d5858           11  6972165    8317356   \n",
       "3        139      0    0  a2ea4295d36bc432           13  7772257    8316875   \n",
       "4        188      0    0  4b7c891bbf454188            8  5322018    8317527   \n",
       "\n",
       "    ShopID                                             NodeID  BrandID  \\\n",
       "0  8814075                            9044230|9078066|9072583  9286972   \n",
       "1  8746722            9103580|9109960|9113454|9113276|9029850    <PAD>   \n",
       "2  8331019  9072169|9078426|9073266|9056371|9049913|910983...    <PAD>   \n",
       "3  8424009  9039527|9018404|9026925|9107731|9031472|906277...  9279217   \n",
       "4  8899037                            9095770|9063374|9105026  9248573   \n",
       "\n",
       "  Com_CateID Com_ShopID Com_BrandID Com_NodeID      PID  \n",
       "0      <PAD>      <PAD>       <PAD>      <PAD>  9351665  \n",
       "1    9353611    9649339       <PAD>      <PAD>  9351665  \n",
       "2      <PAD>      <PAD>       <PAD>      <PAD>  9351665  \n",
       "3      <PAD>      <PAD>       <PAD>      <PAD>  9351665  \n",
       "4      <PAD>      <PAD>       <PAD>      <PAD>  9351666  "
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sample_feature_columns = ['sample_id', 'click', 'buy', 'md5', 'feature_num', 'ItemID', 'CategoryID', 'ShopID', 'NodeID', 'BrandID', 'Com_CateID',\n",
    "                          'Com_ShopID', 'Com_BrandID', 'Com_NodeID', 'PID']\n",
    "# read_table is deprecated; read_csv is the supported equivalent.\n",
    "test_sample_table = pd.read_csv('./ctr_cvr_data/BuyWeight_sampled_sample_skeleton_test_sample_feature_column.csv',\n",
    "                                dtype={'ItemID': object, 'CategoryID': object, 'ShopID': object, 'PID': object},\n",
    "                                header=0)\n",
    "test_sample_table.head()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Common Feature Data Structure\n",
    "<img src=\"assets/common_feature.jpg\"/>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>md5</th>\n",
       "      <th>feature_num</th>\n",
       "      <th>UserID</th>\n",
       "      <th>User_CateIDs</th>\n",
       "      <th>User_ShopIDs</th>\n",
       "      <th>User_BrandIDs</th>\n",
       "      <th>User_NodeIDs</th>\n",
       "      <th>User_Cluster</th>\n",
       "      <th>User_ClusterID</th>\n",
       "      <th>User_Gender</th>\n",
       "      <th>User_Age</th>\n",
       "      <th>User_Level1</th>\n",
       "      <th>User_Level2</th>\n",
       "      <th>User_Occupation</th>\n",
       "      <th>User_Geo</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>00026b381ec790a3</td>\n",
       "      <td>563</td>\n",
       "      <td>413122</td>\n",
       "      <td>451119|450877|450656|449078|450657|449209|4492...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3561749|3760313|3683999|3856639|3775121|371605...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438757</td>\n",
       "      <td>3438768</td>\n",
       "      <td>3438774</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864890</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>000355a921e0225f</td>\n",
       "      <td>452</td>\n",
       "      <td>426366</td>\n",
       "      <td>451119|451117|450954|450877|449079|451120|4506...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3543869|3761098|3556787|3538704|3599666|378386...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438686</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>3438778</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864890</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0003dbee3f6f7e81</td>\n",
       "      <td>317</td>\n",
       "      <td>258626</td>\n",
       "      <td>450954|445398|449079|451120|449082|450276|4510...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3808035|3856639|3667965|3633395|3696256|356905...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438760</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438772</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864889</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>00093b560daa682c</td>\n",
       "      <td>428</td>\n",
       "      <td>223293</td>\n",
       "      <td>445232|449090|450954|455013|449079|446075|4502...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3782769|3858194|3856639|3726796|3643214|349068...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438665</td>\n",
       "      <td>3438761</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438773</td>\n",
       "      <td>3438777</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864887</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>000d2af332a07ad7</td>\n",
       "      <td>464</td>\n",
       "      <td>413926</td>\n",
       "      <td>450880|449925|449308|449813|449360|445206|4506...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3641085|3652956|3779828|3667401|3658148|366395...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438684</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>3438778</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864887</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                md5  feature_num  UserID  \\\n",
       "0  00026b381ec790a3          563  413122   \n",
       "1  000355a921e0225f          452  426366   \n",
       "2  0003dbee3f6f7e81          317  258626   \n",
       "3  00093b560daa682c          428  223293   \n",
       "4  000d2af332a07ad7          464  413926   \n",
       "\n",
       "                                        User_CateIDs User_ShopIDs  \\\n",
       "0  451119|450877|450656|449078|450657|449209|4492...        <PAD>   \n",
       "1  451119|451117|450954|450877|449079|451120|4506...        <PAD>   \n",
       "2  450954|445398|449079|451120|449082|450276|4510...        <PAD>   \n",
       "3  445232|449090|450954|455013|449079|446075|4502...        <PAD>   \n",
       "4  450880|449925|449308|449813|449360|445206|4506...        <PAD>   \n",
       "\n",
       "                                       User_BrandIDs User_NodeIDs  \\\n",
       "0  3561749|3760313|3683999|3856639|3775121|371605...        <PAD>   \n",
       "1  3543869|3761098|3556787|3538704|3599666|378386...        <PAD>   \n",
       "2  3808035|3856639|3667965|3633395|3696256|356905...        <PAD>   \n",
       "3  3782769|3858194|3856639|3726796|3643214|349068...        <PAD>   \n",
       "4  3641085|3652956|3779828|3667401|3658148|366395...        <PAD>   \n",
       "\n",
       "  User_Cluster User_ClusterID User_Gender User_Age User_Level1 User_Level2  \\\n",
       "0      3438658        3438757     3438768  3438774       <PAD>     3438782   \n",
       "1      3438686        3438762     3438769  3438774     3438778     3438782   \n",
       "2      3438658        3438760     3438769  3438772       <PAD>     3438782   \n",
       "3      3438665        3438761     3438769  3438773     3438777     3438782   \n",
       "4      3438684        3438762     3438769  3438774     3438778     3438782   \n",
       "\n",
       "  User_Occupation User_Geo  \n",
       "0         3864885  3864890  \n",
       "1         3864885  3864890  \n",
       "2         3864885  3864889  \n",
       "3         3864885  3864887  \n",
       "4         3864885  3864887  "
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "common_feature_columns = ['md5', 'feature_num', 'UserID', 'User_CateIDs', 'User_ShopIDs', 'User_BrandIDs', 'User_NodeIDs', 'User_Cluster',\n",
    "                          'User_ClusterID', 'User_Gender', 'User_Age', 'User_Level1', 'User_Level2',\n",
    "                          'User_Occupation', 'User_Geo']\n",
    "# read_table is deprecated; read_csv is the supported equivalent.\n",
    "train_common_features = pd.read_csv('./ctr_cvr_data/BuyWeight_sampled_common_features_skeleton_train_sample_feature_column.csv', header=0)\n",
    "train_common_features.head()\n"
   ]
  },
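  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-impression sample skeleton and the user-side common features share the `md5` key, so they can be joined into one flat training table. A hedged sketch on tiny synthetic frames mirroring the schemas printed above (the real tables are `train_sample_table` and `train_common_features`):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Synthetic stand-ins for the two tables, keyed by md5.\n",
    "sample = pd.DataFrame({\n",
    "    'sample_id': [20, 124],\n",
    "    'click': [0, 1],\n",
    "    'buy': [0, 0],\n",
    "    'md5': ['aaa', 'bbb'],\n",
    "})\n",
    "common = pd.DataFrame({\n",
    "    'md5': ['aaa', 'bbb'],\n",
    "    'UserID': [413122, 426366],\n",
    "})\n",
    "\n",
    "# Left join: every impression row keeps its user-side features.\n",
    "joined = sample.merge(common, on='md5', how='left')\n",
    "\n",
    "assert list(joined.columns) == ['sample_id', 'click', 'buy', 'md5', 'UserID']\n",
    "```"
   ]
  },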
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "D:\\Anaconda3\\envs\\CtrPredictDL\\lib\\site-packages\\ipykernel_launcher.py:4: FutureWarning: read_table is deprecated, use read_csv instead.\n",
      "  after removing the cwd from sys.path.\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>md5</th>\n",
       "      <th>feature_num</th>\n",
       "      <th>UserID</th>\n",
       "      <th>User_CateIDs</th>\n",
       "      <th>User_ShopIDs</th>\n",
       "      <th>User_BrandIDs</th>\n",
       "      <th>User_NodeIDs</th>\n",
       "      <th>User_Cluster</th>\n",
       "      <th>User_ClusterID</th>\n",
       "      <th>User_Gender</th>\n",
       "      <th>User_Age</th>\n",
       "      <th>User_Level1</th>\n",
       "      <th>User_Level2</th>\n",
       "      <th>User_Occupation</th>\n",
       "      <th>User_Geo</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>810d5366057b3f58</td>\n",
       "      <td>1025</td>\n",
       "      <td>412797</td>\n",
       "      <td>451311|451286|451133|450954|450656|446913|4506...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3592299|3449840|3650730|3650822|3792091|352433...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438725</td>\n",
       "      <td>3438760</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438772</td>\n",
       "      <td>3438778</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864888</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0001970d9ebf72cf</td>\n",
       "      <td>126</td>\n",
       "      <td>64841</td>\n",
       "      <td>451130|450658|450656|451639|453921|453929|4490...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3704348|3704152|3852278|3849481|3580992|366112...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438756</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438771</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438780</td>\n",
       "      <td>3864885</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0010d0b9633bb5b0</td>\n",
       "      <td>250</td>\n",
       "      <td>66015</td>\n",
       "      <td>455028|451998|451100|445269|445990|450099|4557...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3520924|3505215|3588720|3541711|3801132|382945...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438670</td>\n",
       "      <td>3438756</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438771</td>\n",
       "      <td>3438777</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864886</td>\n",
       "      <td>3864889</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>001459b610a7c186</td>\n",
       "      <td>395</td>\n",
       "      <td>235356</td>\n",
       "      <td>450880|456589|450870|450658|451641|451639|4508...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3765068|3620179|3619511|3668302|3610026|378643...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>001efa0ef1001dd1</td>\n",
       "      <td>849</td>\n",
       "      <td>131668</td>\n",
       "      <td>445600|450229|449070|449317|450658|449178|4509...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3628671|3853135|3805119|3848839|3651862|372880...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438685</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>3438778</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864888</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                md5  feature_num  UserID  \\\n",
       "0  810d5366057b3f58         1025  412797   \n",
       "1  0001970d9ebf72cf          126   64841   \n",
       "2  0010d0b9633bb5b0          250   66015   \n",
       "3  001459b610a7c186          395  235356   \n",
       "4  001efa0ef1001dd1          849  131668   \n",
       "\n",
       "                                        User_CateIDs User_ShopIDs  \\\n",
       "0  451311|451286|451133|450954|450656|446913|4506...        <PAD>   \n",
       "1  451130|450658|450656|451639|453921|453929|4490...        <PAD>   \n",
       "2  455028|451998|451100|445269|445990|450099|4557...        <PAD>   \n",
       "3  450880|456589|450870|450658|451641|451639|4508...        <PAD>   \n",
       "4  445600|450229|449070|449317|450658|449178|4509...        <PAD>   \n",
       "\n",
       "                                       User_BrandIDs User_NodeIDs  \\\n",
       "0  3592299|3449840|3650730|3650822|3792091|352433...        <PAD>   \n",
       "1  3704348|3704152|3852278|3849481|3580992|366112...        <PAD>   \n",
       "2  3520924|3505215|3588720|3541711|3801132|382945...        <PAD>   \n",
       "3  3765068|3620179|3619511|3668302|3610026|378643...        <PAD>   \n",
       "4  3628671|3853135|3805119|3848839|3651862|372880...        <PAD>   \n",
       "\n",
       "  User_Cluster User_ClusterID User_Gender User_Age User_Level1 User_Level2  \\\n",
       "0      3438725        3438760     3438769  3438772     3438778     3438782   \n",
       "1      3438658        3438756     3438769  3438771       <PAD>     3438780   \n",
       "2      3438670        3438756     3438769  3438771     3438777     3438782   \n",
       "3        <PAD>          <PAD>       <PAD>    <PAD>       <PAD>       <PAD>   \n",
       "4      3438685        3438762     3438769  3438774     3438778     3438782   \n",
       "\n",
       "  User_Occupation User_Geo  \n",
       "0         3864885  3864888  \n",
       "1         3864885    <PAD>  \n",
       "2         3864886  3864889  \n",
       "3           <PAD>    <PAD>  \n",
       "4         3864885  3864888  "
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "common_feature_columns = ['md5', 'feature_num', 'UserID', 'User_CateIDs', 'User_ShopIDs', 'User_BrandIDs', 'User_NodeIDs', 'User_Cluster', \n",
    "                     'User_ClusterID', 'User_Gender', 'User_Age', 'User_Level1', 'User_Level2', \n",
    "                     'User_Occupation', 'User_Geo']\n",
    "test_common_features = pd.read_table('./ctr_cvr_data/BuyWeight_sampled_common_features_skeleton_test_sample_feature_column.csv', sep=',', header=0, names=None, engine = 'python')\n",
    "test_common_features.head()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 两表join示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(207104, 15)\n",
      "(88702, 15)\n",
      "(1085721, 15)\n",
      "(541074, 15)\n",
      "(207104, 29)\n",
      "Index(['sample_id', 'click', 'buy', 'md5', 'feature_num_x', 'ItemID',\n",
      "       'CategoryID', 'ShopID', 'NodeID', 'BrandID', 'Com_CateID', 'Com_ShopID',\n",
      "       'Com_BrandID', 'Com_NodeID', 'PID', 'feature_num_y', 'UserID',\n",
      "       'User_CateIDs', 'User_ShopIDs', 'User_BrandIDs', 'User_NodeIDs',\n",
      "       'User_Cluster', 'User_ClusterID', 'User_Gender', 'User_Age',\n",
      "       'User_Level1', 'User_Level2', 'User_Occupation', 'User_Geo'],\n",
      "      dtype='object')\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>sample_id</th>\n",
       "      <th>click</th>\n",
       "      <th>buy</th>\n",
       "      <th>md5</th>\n",
       "      <th>feature_num_x</th>\n",
       "      <th>ItemID</th>\n",
       "      <th>CategoryID</th>\n",
       "      <th>ShopID</th>\n",
       "      <th>NodeID</th>\n",
       "      <th>BrandID</th>\n",
       "      <th>...</th>\n",
       "      <th>User_BrandIDs</th>\n",
       "      <th>User_NodeIDs</th>\n",
       "      <th>User_Cluster</th>\n",
       "      <th>User_ClusterID</th>\n",
       "      <th>User_Gender</th>\n",
       "      <th>User_Age</th>\n",
       "      <th>User_Level1</th>\n",
       "      <th>User_Level2</th>\n",
       "      <th>User_Occupation</th>\n",
       "      <th>User_Geo</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>20</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>15</td>\n",
       "      <td>6986709</td>\n",
       "      <td>8316590</td>\n",
       "      <td>8621426</td>\n",
       "      <td>9107695|9075968|9052327|9032767|9074649|9091748</td>\n",
       "      <td>9348026</td>\n",
       "      <td>...</td>\n",
       "      <td>3534361|3654192|3503151|3722909|3650730|376454...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864887</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>124</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>11</td>\n",
       "      <td>5290285</td>\n",
       "      <td>8316589</td>\n",
       "      <td>8801026</td>\n",
       "      <td>9092320|9093422|9105534</td>\n",
       "      <td>9174649</td>\n",
       "      <td>...</td>\n",
       "      <td>3534361|3654192|3503151|3722909|3650730|376454...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864887</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>155</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>16</td>\n",
       "      <td>7719639</td>\n",
       "      <td>8315276</td>\n",
       "      <td>8700437</td>\n",
       "      <td>9060166|9067562|9067906|9024508|9056445|903715...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>...</td>\n",
       "      <td>3534361|3654192|3503151|3722909|3650730|376454...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864887</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>194</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>11</td>\n",
       "      <td>8239102</td>\n",
       "      <td>8315277</td>\n",
       "      <td>8731751</td>\n",
       "      <td>9055739|9114104|9113741</td>\n",
       "      <td>9181078</td>\n",
       "      <td>...</td>\n",
       "      <td>3534361|3654192|3503151|3722909|3650730|376454...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864887</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>197</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>10</td>\n",
       "      <td>4505957</td>\n",
       "      <td>8316758</td>\n",
       "      <td>8525993</td>\n",
       "      <td>9042317|9077687|9020906|9074519|9061381|9038129</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>...</td>\n",
       "      <td>3534361|3654192|3503151|3722909|3650730|376454...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438658</td>\n",
       "      <td>3438762</td>\n",
       "      <td>3438769</td>\n",
       "      <td>3438774</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>3438782</td>\n",
       "      <td>3864885</td>\n",
       "      <td>3864887</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 29 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "   sample_id  click  buy               md5  feature_num_x   ItemID CategoryID  \\\n",
       "0         20      0    0  bacff91692951881             15  6986709    8316590   \n",
       "1        124      1    0  bacff91692951881             11  5290285    8316589   \n",
       "2        155      0    0  bacff91692951881             16  7719639    8315276   \n",
       "3        194      0    0  bacff91692951881             11  8239102    8315277   \n",
       "4        197      0    0  bacff91692951881             10  4505957    8316758   \n",
       "\n",
       "    ShopID                                             NodeID  BrandID  ...  \\\n",
       "0  8621426    9107695|9075968|9052327|9032767|9074649|9091748  9348026  ...   \n",
       "1  8801026                            9092320|9093422|9105534  9174649  ...   \n",
       "2  8700437  9060166|9067562|9067906|9024508|9056445|903715...    <PAD>  ...   \n",
       "3  8731751                            9055739|9114104|9113741  9181078  ...   \n",
       "4  8525993    9042317|9077687|9020906|9074519|9061381|9038129    <PAD>  ...   \n",
       "\n",
       "                                       User_BrandIDs User_NodeIDs  \\\n",
       "0  3534361|3654192|3503151|3722909|3650730|376454...        <PAD>   \n",
       "1  3534361|3654192|3503151|3722909|3650730|376454...        <PAD>   \n",
       "2  3534361|3654192|3503151|3722909|3650730|376454...        <PAD>   \n",
       "3  3534361|3654192|3503151|3722909|3650730|376454...        <PAD>   \n",
       "4  3534361|3654192|3503151|3722909|3650730|376454...        <PAD>   \n",
       "\n",
       "  User_Cluster User_ClusterID User_Gender  User_Age User_Level1 User_Level2  \\\n",
       "0      3438658        3438762     3438769   3438774       <PAD>     3438782   \n",
       "1      3438658        3438762     3438769   3438774       <PAD>     3438782   \n",
       "2      3438658        3438762     3438769   3438774       <PAD>     3438782   \n",
       "3      3438658        3438762     3438769   3438774       <PAD>     3438782   \n",
       "4      3438658        3438762     3438769   3438774       <PAD>     3438782   \n",
       "\n",
       "  User_Occupation User_Geo  \n",
       "0         3864885  3864887  \n",
       "1         3864885  3864887  \n",
       "2         3864885  3864887  \n",
       "3         3864885  3864887  \n",
       "4         3864885  3864887  \n",
       "\n",
       "[5 rows x 29 columns]"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "print(train_sample_table.shape)\n",
    "print(train_common_features.shape)\n",
    "\n",
    "print(test_sample_table.shape)\n",
    "print(test_common_features.shape)\n",
    "\n",
    "merge_data = pd.merge(train_sample_table, train_common_features, on='md5',how='inner')\n",
    "\n",
    "print(merge_data.shape)\n",
    "print(merge_data.columns)\n",
    "merge_data.head()\n"
   ]
  },
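  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When joining the two tables on `md5`, it is easy to silently duplicate or drop rows. A minimal sketch (with hypothetical toy frames standing in for `train_sample_table` / `train_common_features`) of how the `validate` and `indicator` arguments of `pd.merge` can guard the join:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "sample = pd.DataFrame({'md5': ['a', 'a', 'b', 'c'], 'click': [0, 1, 0, 0]})\n",
    "common = pd.DataFrame({'md5': ['a', 'b'], 'UserID': [1, 2]})\n",
    "\n",
    "# validate='m:1' asserts md5 is unique on the common-features side,\n",
    "# catching accidental row explosion in the many-to-one join\n",
    "merged = pd.merge(sample, common, on='md5', how='inner', validate='m:1')\n",
    "print(merged.shape)\n",
    "\n",
    "# indicator=True marks each row's origin, exposing md5 values\n",
    "# that have no common-features record\n",
    "check = pd.merge(sample, common, on='md5', how='left', indicator=True)\n",
    "print((check['_merge'] == 'left_only').sum())\n",
    "```\n",
    "\n",
    "An inner join drops samples whose `md5` is missing from the common-features table, so comparing row counts before and after the merge is a cheap sanity check."
   ]
  },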
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 实现数据预处理"
   ]
  },
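  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The preprocessing below repeats one pattern per ID column: collect every token from both splits into a shared vocabulary, map tokens to integers, and pad the `|`-separated multi-value fields to a fixed length with `<PAD>`. A minimal helper sketching that pattern (the function name and `max_len` default are illustrative, not part of the original code):\n",
    "\n",
    "```python\n",
    "def build_vocab_and_encoder(train_col, test_col, max_len=1):\n",
    "    # One vocabulary shared by train and test, so both splits\n",
    "    # map the same raw ID to the same integer\n",
    "    tokens = set()\n",
    "    for val in list(train_col) + list(test_col):\n",
    "        tokens.update(str(val).split('|'))\n",
    "    tokens.add('<PAD>')\n",
    "    tok2int = {tok: i for i, tok in enumerate(tokens)}\n",
    "\n",
    "    def encode(val):\n",
    "        # Truncate to max_len, then right-pad with <PAD>\n",
    "        ids = [tok2int[t] for t in str(val).split('|')][:max_len]\n",
    "        return ids + [tok2int['<PAD>']] * (max_len - len(ids))\n",
    "\n",
    "    return tok2int, encode\n",
    "\n",
    "tok2int, encode = build_vocab_and_encoder(['1|2', '3'], ['2|4'], max_len=3)\n",
    "print(encode('1|2'))\n",
    "```\n",
    "\n",
    "The long function below inlines this logic once per column; factoring it out this way would shorten it considerably, but the inline version makes each column's handling explicit."
   ]
  },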
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>sample_id</th>\n",
       "      <th>click</th>\n",
       "      <th>buy</th>\n",
       "      <th>md5</th>\n",
       "      <th>feature_num</th>\n",
       "      <th>ItemID</th>\n",
       "      <th>CategoryID</th>\n",
       "      <th>ShopID</th>\n",
       "      <th>NodeID</th>\n",
       "      <th>BrandID</th>\n",
       "      <th>Com_CateID</th>\n",
       "      <th>Com_ShopID</th>\n",
       "      <th>Com_BrandID</th>\n",
       "      <th>Com_NodeID</th>\n",
       "      <th>PID</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>20</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>15</td>\n",
       "      <td>6986709</td>\n",
       "      <td>8316590</td>\n",
       "      <td>8621426</td>\n",
       "      <td>9107695|9075968|9052327|9032767|9074649|9091748</td>\n",
       "      <td>9348026</td>\n",
       "      <td>9354837</td>\n",
       "      <td>9565193</td>\n",
       "      <td>9997471</td>\n",
       "      <td>10083008</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>124</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>11</td>\n",
       "      <td>5290285</td>\n",
       "      <td>8316589</td>\n",
       "      <td>8801026</td>\n",
       "      <td>9092320|9093422|9105534</td>\n",
       "      <td>9174649</td>\n",
       "      <td>9354836</td>\n",
       "      <td>9686171</td>\n",
       "      <td>9874111</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>155</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>16</td>\n",
       "      <td>7719639</td>\n",
       "      <td>8315276</td>\n",
       "      <td>8700437</td>\n",
       "      <td>9060166|9067562|9067906|9024508|9056445|903715...</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9353608</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>10021801</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>194</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>11</td>\n",
       "      <td>8239102</td>\n",
       "      <td>8315277</td>\n",
       "      <td>8731751</td>\n",
       "      <td>9055739|9114104|9113741</td>\n",
       "      <td>9181078</td>\n",
       "      <td>9353609</td>\n",
       "      <td>9639217</td>\n",
       "      <td>9878755</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>197</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>bacff91692951881</td>\n",
       "      <td>10</td>\n",
       "      <td>4505957</td>\n",
       "      <td>8316758</td>\n",
       "      <td>8525993</td>\n",
       "      <td>9042317|9077687|9020906|9074519|9061381|9038129</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>&lt;PAD&gt;</td>\n",
       "      <td>9351665</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   sample_id  click  buy               md5  feature_num   ItemID CategoryID  \\\n",
       "0         20      0    0  bacff91692951881           15  6986709    8316590   \n",
       "1        124      1    0  bacff91692951881           11  5290285    8316589   \n",
       "2        155      0    0  bacff91692951881           16  7719639    8315276   \n",
       "3        194      0    0  bacff91692951881           11  8239102    8315277   \n",
       "4        197      0    0  bacff91692951881           10  4505957    8316758   \n",
       "\n",
       "    ShopID                                             NodeID  BrandID  \\\n",
       "0  8621426    9107695|9075968|9052327|9032767|9074649|9091748  9348026   \n",
       "1  8801026                            9092320|9093422|9105534  9174649   \n",
       "2  8700437  9060166|9067562|9067906|9024508|9056445|903715...    <PAD>   \n",
       "3  8731751                            9055739|9114104|9113741  9181078   \n",
       "4  8525993    9042317|9077687|9020906|9074519|9061381|9038129    <PAD>   \n",
       "\n",
       "  Com_CateID Com_ShopID Com_BrandID Com_NodeID      PID  \n",
       "0    9354837    9565193     9997471   10083008  9351665  \n",
       "1    9354836    9686171     9874111      <PAD>  9351665  \n",
       "2    9353608      <PAD>       <PAD>   10021801  9351665  \n",
       "3    9353609    9639217     9878755      <PAD>  9351665  \n",
       "4      <PAD>      <PAD>       <PAD>      <PAD>  9351665  "
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_sample_table.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "sample_id       int64\n",
      "click           int64\n",
      "buy             int64\n",
      "md5            object\n",
      "feature_num     int64\n",
      "ItemID         object\n",
      "CategoryID     object\n",
      "ShopID         object\n",
      "NodeID         object\n",
      "BrandID        object\n",
      "Com_CateID     object\n",
      "Com_ShopID     object\n",
      "Com_BrandID    object\n",
      "Com_NodeID     object\n",
      "PID            object\n",
      "dtype: object\n"
     ]
    }
   ],
   "source": [
    "# 打印Column和Types，确保Train和测试集可以一起序列化\n",
    "# train_sample_table['ItemID'].head()\n",
    "# print(test_sample_table.head()['ItemID'])\n",
    "# print(train_common_features.head()['UserID'])\n",
    "# print(test_sample_table.dtypes)\n",
    "# print(train_sample_table.dtypes)\n",
    "#print()"
   ]
  },
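  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If a column such as `ItemID` is loaded as `int64` in one split but as `object` in the other (for example because some rows contain `|`-joined lists), downstream serialization breaks. Besides passing `dtype=` to the reader as done below, one possible fix (toy frames and a hypothetical column list, for illustration only) is to cast the shared ID columns to `str` after loading:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "train = pd.DataFrame({'ItemID': [123, 456]})       # inferred as int64\n",
    "test = pd.DataFrame({'ItemID': ['789', '12|34']})  # inferred as object\n",
    "\n",
    "# Cast the ID columns shared by both splits to str so their dtypes agree\n",
    "for col in ['ItemID']:\n",
    "    train[col] = train[col].astype(str)\n",
    "    test[col] = test[col].astype(str)\n",
    "\n",
    "print(train['ItemID'].dtype == test['ItemID'].dtype)\n",
    "```"
   ]
  },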
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Index(['sample_id', 'click', 'buy', 'md5', 'feature_num', 'ItemID',\n",
       "       'CategoryID', 'ShopID', 'NodeID', 'BrandID', 'Com_CateID', 'Com_ShopID',\n",
       "       'Com_BrandID', 'Com_NodeID', 'PID'],\n",
       "      dtype='object')"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_common_features.columns\n",
    "train_common_features['feature_num'].head()\n",
    "train_sample_table.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "127394\n",
      "4464\n",
      "86685\n",
      "161331\n",
      "40627\n",
      "31117\n",
      "17482\n",
      "65748\n",
      "3\n",
      "value train 86685\n",
      "value test 212293\n",
      "inner product 69262\n"
     ]
    }
   ],
   "source": [
    "# 打印Unique ID数\n",
    "value1 = set(train_common_features['UserID'].tolist())\n",
    "print(len(train_sample_table['ItemID'].unique()))\n",
    "print(len(train_sample_table['CategoryID'].unique()))\n",
    "print(len(train_sample_table['ShopID'].unique()))\n",
    "print(len(train_sample_table['NodeID'].unique()))\n",
    "print(len(train_sample_table['BrandID'].unique()))\n",
    "print(len(train_sample_table['Com_ShopID'].unique()))\n",
    "print(len(train_sample_table['Com_BrandID'].unique()))\n",
    "print(len(train_sample_table['Com_NodeID'].unique()))\n",
    "print(len(train_sample_table['PID'].unique()))\n",
    "\n",
    "\n",
    "#11176 640062 90 6111 258552 101090 4695 91412 43051 3\n",
    "\n",
    "value1 = set(train_sample_table['ShopID'].tolist())\n",
    "value2 = set(test_sample_table['ShopID'].tolist())\n",
    "\n",
    "# value1 = set(train_common_features['UserID'].tolist())\n",
    "# value2 = set(test_common_features['UserID'].tolist())\n",
    "print(\"value train\",len(value1))\n",
    "print(\"value test\",len(value2))\n",
    "print(\"inner product\",len(value1&value2))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "def load_ESMM_Train_and_Test_Data():\n",
    "    \"\"\"\n",
    "    Load Dataset from File\n",
    "    \"\"\"\n",
    "    sample_feature_columns = ['sample_id', 'click', 'buy', 'md5', 'feature_num', 'ItemID','CategoryID','ShopID','NodeID','BrandID','Com_CateID',\n",
    "                     'Com_ShopID','Com_BrandID','Com_NodeID','PID']\n",
    "    \n",
    "    common_feature_columns = ['md5', 'feature_num', 'UserID', 'User_CateIDs', 'User_ShopIDs', 'User_BrandIDs', 'User_NodeIDs', 'User_Cluster', \n",
    "                     'User_ClusterID', 'User_Gender', 'User_Age', 'User_Level1', 'User_Level2', \n",
    "                     'User_Occupation', 'User_Geo']\n",
    "    \n",
    "    # 强制转化为其中部分列为object，是因为训练和测试某些列，Pandas load类型不一致，影响后面的序列化\n",
    "    train_sample_table = pd.read_table('./ctr_cvr_data/BuyWeight_sampled_sample_skeleton_train_sample_feature_column.csv', sep=',',\\\n",
    "                                  dtype={'ItemID': object, 'CategoryID': object, 'ShopID': object, 'PID': object},\\\n",
    "                                  header=0, names=None, engine = 'python')\n",
    "    train_common_features = pd.read_table('./ctr_cvr_data/BuyWeight_sampled_common_features_skeleton_train_sample_feature_column.csv', sep=',', header=0, names=None, engine = 'python')\n",
    "    \n",
    "    test_sample_table = pd.read_table('./ctr_cvr_data/BuyWeight_sampled_sample_skeleton_test_sample_feature_column.csv', sep=',', \\\n",
    "                                  dtype={'ItemID': object, 'CategoryID': object, 'ShopID': object, 'PID': object},\\\n",
    "                                  header=0, names=None, engine = 'python')\n",
    "    test_common_features = pd.read_table('./ctr_cvr_data/BuyWeight_sampled_common_features_skeleton_test_sample_feature_column.csv', sep=',', header=0, names=None, engine = 'python')\n",
    "    \n",
    "    #itemID转数字字典\n",
    "    ItemID_set = set()\n",
    "    for val in train_sample_table['ItemID'].str.split('|'):\n",
    "        ItemID_set.update(val)\n",
    "    for val in test_sample_table['ItemID'].str.split('|'):\n",
    "        ItemID_set.update(val)\n",
    "    ItemID_set.add('<PAD>')\n",
    "    ItemID2int = {val:ii for ii, val in enumerate(ItemID_set)}\n",
    "    #itemID 转成等长数字列表，示例，其实itemID是One Hot的，不需要此操作\n",
    "    ItemID_map = {val:[ItemID2int[row] for row in val.split('|')]  \\\n",
    "                  for ii,val in enumerate(set(train_sample_table['ItemID']))}\n",
    "    test_ItemID_map = {val:[ItemID2int[row] for row in val.split('|')]  \\\n",
    "                  for ii,val in enumerate(set(test_sample_table['ItemID']))}\n",
    "    # merge train & test\n",
    "    ItemID_map.update(test_ItemID_map)\n",
    "    ItemID_map_max_len = 1\n",
    "    print(\"ItemID_map max_len:\", ItemID_map_max_len)\n",
    "    for key in ItemID_map:\n",
    "        for cnt in range(ItemID_map_max_len - len(ItemID_map[key])):\n",
    "            ItemID_map[key].insert(len(ItemID_map[key]) + cnt,itemID2int['<PAD>'])\n",
    "    train_sample_table['ItemID'] = train_sample_table['ItemID'].map(ItemID_map)\n",
    "    test_sample_table['ItemID'] = test_sample_table['ItemID'].map(ItemID_map)\n",
    "    print(\"ItemID finish\")\n",
    "    \n",
    "    \n",
    "    #User_CateIDs转数字字典\n",
    "    User_CateIDs_set = set()\n",
    "    for val in train_common_features['User_CateIDs'].str.split('|'):\n",
    "        User_CateIDs_set.update(val)\n",
    "    for val in test_common_features['User_CateIDs'].str.split('|'):\n",
    "        User_CateIDs_set.update(val)\n",
    "    User_CateIDs_set.add('<PAD>')\n",
    "    User_CateIDs2int = {val:ii for ii, val in enumerate(User_CateIDs_set)}\n",
    "    #User_CateIDs 转成等长数字列表\n",
    "    User_CateIDs_map = {val:[User_CateIDs2int[row] for row in val.split('|')]  \\\n",
    "                  for ii,val in enumerate(set(train_common_features['User_CateIDs']))}\n",
    "    test_User_CateIDs_map = {val:[User_CateIDs2int[row] for row in val.split('|')]  \\\n",
    "                  for ii,val in enumerate(set(test_common_features['User_CateIDs']))}\n",
    "    # merge train & test\n",
    "    User_CateIDs_map.update(test_User_CateIDs_map)\n",
    "    User_CateIDs_map_max_len = 100\n",
    "    print(\"User_CateIDs_map max_len:\", User_CateIDs_map_max_len)\n",
    "    for key in User_CateIDs_map:\n",
    "        for cnt in range(User_CateIDs_map_max_len - len(User_CateIDs_map[key])):\n",
    "            User_CateIDs_map[key].insert(len(User_CateIDs_map[key]) + cnt,User_CateIDs2int['<PAD>'])\n",
    "    train_common_features['User_CateIDs'] = train_common_features['User_CateIDs'].map(User_CateIDs_map)\n",
    "    test_common_features['User_CateIDs'] = test_common_features['User_CateIDs'].map(User_CateIDs_map)\n",
    "    print(\"User_CateIDs finish\")\n",
    "    \n",
    "    #User_BrandIDs转数字字典\n",
    "    User_BrandIDs_set = set()\n",
    "    for val in train_common_features['User_BrandIDs'].str.split('|'):\n",
    "        User_BrandIDs_set.update(val)\n",
    "    for val in test_common_features['User_BrandIDs'].str.split('|'):\n",
    "        User_BrandIDs_set.update(val)\n",
    "    User_BrandIDs_set.add('<PAD>')\n",
    "    User_BrandIDs2int = {val:ii for ii, val in enumerate(User_BrandIDs_set)}\n",
    "    #User_BrandIDs 转成等长数字列表\n",
    "    User_BrandIDs_map = {val:[User_BrandIDs2int[row] for row in val.split('|')]  \\\n",
    "                  for ii,val in enumerate(set(train_common_features['User_BrandIDs']))}\n",
    "    test_User_BrandIDs_map = {val:[User_BrandIDs2int[row] for row in val.split('|')]  \\\n",
    "                  for ii,val in enumerate(set(test_common_features['User_BrandIDs']))}\n",
    "    # merge train & test\n",
    "    User_BrandIDs_map.update(test_User_BrandIDs_map)\n",
    "    User_BrandIDs_map_max_len = 100\n",
    "    print(\"User_BrandIDs_map max_len:\", User_BrandIDs_map_max_len)\n",
    "    for key in User_BrandIDs_map:\n",
    "        for cnt in range(User_BrandIDs_map_max_len - len(User_BrandIDs_map[key])):\n",
    "            User_BrandIDs_map[key].insert(len(User_BrandIDs_map[key]) + cnt,User_BrandIDs2int['<PAD>'])\n",
    "    train_common_features['User_BrandIDs'] = train_common_features['User_BrandIDs'].map(User_BrandIDs_map)\n",
    "    test_common_features['User_BrandIDs'] = test_common_features['User_BrandIDs'].map(User_BrandIDs_map)\n",
    "    print(\"User_BrandIDs finish\")\n",
    "    \n",
    "    \n",
    "    #userID 转数字字典\n",
    "    UserID_set = set()\n",
    "    for val in train_common_features['UserID']:\n",
    "        UserID_set.add(val)\n",
    "    for val in test_common_features['UserID']:\n",
    "        UserID_set.add(val)\n",
    "    UserID2int = {val:ii for ii, val in enumerate(UserID_set)}\n",
    "    UserID_map_max_len = 1\n",
    "    print(\"UserID_map max_len:\", UserID_map_max_len)\n",
    "    train_common_features['UserID'] = train_common_features['UserID'].map(UserID2int)\n",
    "    test_common_features['UserID'] = test_common_features['UserID'].map(UserID2int)\n",
    "    print(\"UserID finish\")\n",
    "    \n",
    "    #User_Cluster 转数字字典\n",
    "    User_Cluster_set = set()\n",
    "    for val in train_common_features['User_Cluster']:\n",
    "        User_Cluster_set.add(val)\n",
    "    for val in test_common_features['User_Cluster']:\n",
    "        User_Cluster_set.add(val)\n",
    "    User_Cluster2int = {val:ii for ii, val in enumerate(User_Cluster_set)}\n",
    "    User_Cluster_map_max_len = 1\n",
    "    print(\"User_Cluster_map max_len:\", User_Cluster_map_max_len)\n",
    "    train_common_features['User_Cluster'] = train_common_features['User_Cluster'].map(User_Cluster2int)\n",
    "    test_common_features['User_Cluster'] = test_common_features['User_Cluster'].map(User_Cluster2int)\n",
    "    print(\"User_Cluster finish\")\n",
    "    \n",
    "    #CategoryID 转数字字典\n",
    "    CategoryID_set = set()\n",
    "    for val in train_sample_table['CategoryID']:\n",
    "        CategoryID_set.add(val)\n",
    "    for val in test_sample_table['CategoryID']:\n",
    "        CategoryID_set.add(val)\n",
    "    CategoryID2int = {val:ii for ii, val in enumerate(CategoryID_set)}\n",
    "    CategoryID_map_max_len = 1\n",
    "    print(\"CategoryID_map max_len:\", CategoryID_map_max_len)\n",
    "    train_sample_table['CategoryID'] = train_sample_table['CategoryID'].map(CategoryID2int)\n",
    "    test_sample_table['CategoryID'] = test_sample_table['CategoryID'].map(CategoryID2int)\n",
    "    print(\"CategoryID finish\")\n",
    "    \n",
    "    # Map ShopID values to integer indices\n",
    "    ShopID_set = set()\n",
    "    for val in train_sample_table['ShopID']:\n",
    "        ShopID_set.add(val)\n",
    "    for val in test_sample_table['ShopID']:\n",
    "        ShopID_set.add(val)\n",
    "    ShopID2int = {val:ii for ii, val in enumerate(ShopID_set)}\n",
    "    ShopID_map_max_len = 1\n",
    "    print(\"ShopID_map max_len:\", ShopID_map_max_len)\n",
    "    train_sample_table['ShopID'] = train_sample_table['ShopID'].map(ShopID2int)\n",
    "    test_sample_table['ShopID'] = test_sample_table['ShopID'].map(ShopID2int)\n",
    "    print(\"ShopID finish\")\n",
    "\n",
    "    # Map BrandID values to integer indices\n",
    "    BrandID_set = set()\n",
    "    for val in train_sample_table['BrandID']:\n",
    "        BrandID_set.add(val)\n",
    "    for val in test_sample_table['BrandID']:\n",
    "        BrandID_set.add(val)\n",
    "    BrandID2int = {val:ii for ii, val in enumerate(BrandID_set)}\n",
    "    BrandID_map_max_len = 1\n",
    "    print(\"BrandID_map max_len:\", BrandID_map_max_len)\n",
    "    train_sample_table['BrandID'] = train_sample_table['BrandID'].map(BrandID2int)\n",
    "    test_sample_table['BrandID'] = test_sample_table['BrandID'].map(BrandID2int)\n",
    "    print(\"BrandID finish\")\n",
    "    \n",
    "    # Map Com_CateID values to integer indices\n",
    "    Com_CateID_set = set()\n",
    "    for val in train_sample_table['Com_CateID']:\n",
    "        Com_CateID_set.add(val)\n",
    "    for val in test_sample_table['Com_CateID']:\n",
    "        Com_CateID_set.add(val)\n",
    "    Com_CateID2int = {val:ii for ii, val in enumerate(Com_CateID_set)}\n",
    "    Com_CateID_map_max_len = 1\n",
    "    print(\"Com_CateID_map max_len:\", Com_CateID_map_max_len)\n",
    "    train_sample_table['Com_CateID'] = train_sample_table['Com_CateID'].map(Com_CateID2int)\n",
    "    test_sample_table['Com_CateID'] = test_sample_table['Com_CateID'].map(Com_CateID2int)\n",
    "    print(\"Com_CateID finish\")\n",
    "    \n",
    "    # Map Com_ShopID values to integer indices\n",
    "    Com_ShopID_set = set()\n",
    "    for val in train_sample_table['Com_ShopID']:\n",
    "        Com_ShopID_set.add(val)\n",
    "    for val in test_sample_table['Com_ShopID']:\n",
    "        Com_ShopID_set.add(val)\n",
    "    Com_ShopID2int = {val:ii for ii, val in enumerate(Com_ShopID_set)}\n",
    "    Com_ShopID_map_max_len = 1\n",
    "    print(\"Com_ShopID_map max_len:\", Com_ShopID_map_max_len)\n",
    "    train_sample_table['Com_ShopID'] = train_sample_table['Com_ShopID'].map(Com_ShopID2int)\n",
    "    test_sample_table['Com_ShopID'] = test_sample_table['Com_ShopID'].map(Com_ShopID2int)\n",
    "    print(\"Com_ShopID finish\")\n",
    "    \n",
    "    # Map Com_BrandID values to integer indices\n",
    "    Com_BrandID_set = set()\n",
    "    for val in train_sample_table['Com_BrandID']:\n",
    "        Com_BrandID_set.add(val)\n",
    "    for val in test_sample_table['Com_BrandID']:\n",
    "        Com_BrandID_set.add(val)\n",
    "    Com_BrandID2int = {val:ii for ii, val in enumerate(Com_BrandID_set)}\n",
    "    Com_BrandID_map_max_len = 1\n",
    "    print(\"Com_BrandID_map max_len:\", Com_BrandID_map_max_len)\n",
    "    train_sample_table['Com_BrandID'] = train_sample_table['Com_BrandID'].map(Com_BrandID2int)\n",
    "    test_sample_table['Com_BrandID'] = test_sample_table['Com_BrandID'].map(Com_BrandID2int)\n",
    "    print(\"Com_BrandID finish\")\n",
    "    \n",
    "    # Map PID values to integer indices\n",
    "    PID_set = set()\n",
    "    for val in train_sample_table['PID']:\n",
    "        PID_set.add(val)\n",
    "    for val in test_sample_table['PID']:\n",
    "        PID_set.add(val)\n",
    "    PID2int = {val:ii for ii, val in enumerate(PID_set)}\n",
    "    PID_map_max_len = 1\n",
    "    print(\"PID_map max_len:\", PID_map_max_len)\n",
    "    train_sample_table['PID'] = train_sample_table['PID'].map(PID2int)\n",
    "    test_sample_table['PID'] = test_sample_table['PID'].map(PID2int)\n",
    "    print(\"PID finish\")\n",
    "    \n",
    "    \n",
    "    # Join the two tables on md5\n",
    "    train_data = pd.merge(train_sample_table, train_common_features, on='md5',how='inner')\n",
    "    test_data = pd.merge(test_sample_table, test_common_features, on='md5',how='inner')\n",
    "\n",
    "    print(\"Sample/Common Merged\")\n",
    "    # Split the data into feature (X) and target (y) tables\n",
    "    feature_fields = ['UserID','ItemID','User_Cluster', 'CategoryID','ShopID',\\\n",
    "                      'BrandID','Com_CateID','Com_ShopID','Com_BrandID','PID','User_CateIDs','User_BrandIDs']\n",
    "    target_fields = ['click','buy']\n",
    "    train_features_pd, train_targets_pd = train_data[feature_fields], train_data[target_fields]\n",
    "    train_features = train_features_pd.values\n",
    "    train_targets_values = train_targets_pd.values\n",
    "    \n",
    "    test_features_pd, test_targets_pd = test_data[feature_fields], test_data[target_fields]\n",
    "    test_features = test_features_pd.values\n",
    "    test_targets_values = test_targets_pd.values\n",
    "    \n",
    "    return UserID_map_max_len, ItemID_map_max_len, User_Cluster_map_max_len, \\\n",
    "User_CateIDs_map_max_len, User_BrandIDs_map_max_len, \\\n",
    "CategoryID_map_max_len, ShopID_map_max_len, BrandID_map_max_len, Com_CateID_map_max_len,\\\n",
    "Com_ShopID_map_max_len, Com_BrandID_map_max_len, PID_map_max_len, UserID2int, ItemID2int,\\\n",
    "User_Cluster2int, User_CateIDs2int, User_BrandIDs2int,  CategoryID2int, ShopID2int, BrandID2int, Com_CateID2int, \\\n",
    "Com_ShopID2int, Com_BrandID2int, PID2int, train_features, train_targets_values, train_data, \\\n",
    "test_features, test_targets_values, test_data\n",
    "          "
   ]
  },
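  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The function above repeats one encoding pattern per feature: collect a column's values over train and test, build a value-to-integer dictionary, then remap the column with `Series.map`. A toy sketch of that pattern with hypothetical values (`sorted` is added here only to make the assigned IDs deterministic):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "train = pd.DataFrame({'ShopID': ['s9', 's3', 's9']})\n",
    "test = pd.DataFrame({'ShopID': ['s3', 's7']})\n",
    "\n",
    "# Union of values across both splits, then value -> integer dictionary\n",
    "vocab = set(train['ShopID']) | set(test['ShopID'])\n",
    "ShopID2int = {val: ii for ii, val in enumerate(sorted(vocab))}\n",
    "\n",
    "train['ShopID'] = train['ShopID'].map(ShopID2int)\n",
    "test['ShopID'] = test['ShopID'].map(ShopID2int)\n",
    "print(ShopID2int)  # {'s3': 0, 's7': 1, 's9': 2}\n",
    "```\n",
    "\n",
    "Building the dictionary over train and test jointly guarantees that values appearing only in the test split still get an integer ID instead of `NaN` after `map`."
   ]
  },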
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load the data and save it locally"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "D:\\Anaconda3\\envs\\CtrPredictDL\\lib\\site-packages\\ipykernel_launcher.py:16: FutureWarning: read_table is deprecated, use read_csv instead.\n",
      "  app.launch_new_instance()\n",
      "D:\\Anaconda3\\envs\\CtrPredictDL\\lib\\site-packages\\ipykernel_launcher.py:17: FutureWarning: read_table is deprecated, use read_csv instead.\n",
      "D:\\Anaconda3\\envs\\CtrPredictDL\\lib\\site-packages\\ipykernel_launcher.py:21: FutureWarning: read_table is deprecated, use read_csv instead.\n",
      "D:\\Anaconda3\\envs\\CtrPredictDL\\lib\\site-packages\\ipykernel_launcher.py:22: FutureWarning: read_table is deprecated, use read_csv instead.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ItemID_map max_len: 1\n",
      "ItemID finish\n",
      "User_CateIDs_map max_len: 100\n",
      "User_CateIDs finish\n",
      "User_BrandIDs_map max_len: 100\n",
      "User_BrandIDs finish\n",
      "UserID_map max_len: 1\n",
      "UserID finish\n",
      "User_Cluster_map max_len: 1\n",
      "User_Cluster finish\n",
      "CategoryID_map max_len: 1\n",
      "CategoryID finish\n",
      "ShopID_map max_len: 1\n",
      "ShopID finish\n",
      "BrandID_map max_len: 1\n",
      "BrandID finish\n",
      "Com_CateID_map max_len: 1\n",
      "Com_CateID finish\n",
      "Com_ShopID_map max_len: 1\n",
      "Com_ShopID finish\n",
      "Com_BrandID_map max_len: 1\n",
      "Com_BrandID finish\n",
      "PID_map max_len: 1\n",
      "PID finish\n",
      "Sample/Common Merged\n",
      "0\n",
      "0\n"
     ]
    }
   ],
   "source": [
    "UserID_map_max_len, ItemID_map_max_len, User_Cluster_map_max_len, \\\n",
    "User_CateIDs_map_max_len, User_BrandIDs_map_max_len, \\\n",
    "CategoryID_map_max_len, ShopID_map_max_len, BrandID_map_max_len, Com_CateID_map_max_len,\\\n",
    "Com_ShopID_map_max_len, Com_BrandID_map_max_len, PID_map_max_len, UserID2int, ItemID2int,\\\n",
    "User_Cluster2int, User_CateIDs2int, User_BrandIDs2int,  CategoryID2int, ShopID2int, BrandID2int, Com_CateID2int, \\\n",
    "Com_ShopID2int, Com_BrandID2int, PID2int, train_features, train_targets_values, train_data, \\\n",
    "test_features, test_targets_values, test_data = load_ESMM_Train_and_Test_Data()\n",
    "print(0)\n",
    "pickle.dump((UserID_map_max_len, ItemID_map_max_len, User_Cluster_map_max_len, \\\n",
    "User_CateIDs_map_max_len, User_BrandIDs_map_max_len, \\\n",
    "CategoryID_map_max_len, ShopID_map_max_len, BrandID_map_max_len, Com_CateID_map_max_len,\\\n",
    "Com_ShopID_map_max_len, Com_BrandID_map_max_len, PID_map_max_len, UserID2int, ItemID2int,\\\n",
    "User_Cluster2int, User_CateIDs2int, User_BrandIDs2int,  CategoryID2int, ShopID2int, BrandID2int, Com_CateID2int, \\\n",
    "Com_ShopID2int, Com_BrandID2int, PID2int, train_features, train_targets_values, train_data, \\\n",
    "test_features, test_targets_values, test_data), open('./save/preprocess.p', 'wb'))\n",
    "print(0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "313949"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "User_CateIDs2int['<PAD>']\n",
    "User_BrandIDs2int['<PAD>']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[61987,\n",
       " 237534,\n",
       " 273032,\n",
       " 237000,\n",
       " 312547,\n",
       " 20614,\n",
       " 41041,\n",
       " 203423,\n",
       " 230609,\n",
       " 33934,\n",
       " 210320,\n",
       " 289299,\n",
       " 277248,\n",
       " 174912,\n",
       " 42259,\n",
       " 319610,\n",
       " 342852,\n",
       " 195747,\n",
       " 135368,\n",
       " 49560,\n",
       " 236878,\n",
       " 70608,\n",
       " 15384,\n",
       " 213226,\n",
       " 113704,\n",
       " 18407,\n",
       " 218303,\n",
       " 12202,\n",
       " 195198,\n",
       " 41792,\n",
       " 54187,\n",
       " 105309,\n",
       " 276617,\n",
       " 146648,\n",
       " 263711,\n",
       " 79472,\n",
       " 125298,\n",
       " 238694,\n",
       " 339133,\n",
       " 121171,\n",
       " 210057,\n",
       " 43155,\n",
       " 114212,\n",
       " 66315,\n",
       " 81082,\n",
       " 230752,\n",
       " 314431,\n",
       " 61442,\n",
       " 307621,\n",
       " 32590,\n",
       " 217586,\n",
       " 301784,\n",
       " 149644,\n",
       " 228381,\n",
       " 174036,\n",
       " 220281,\n",
       " 103793,\n",
       " 225890,\n",
       " 156137,\n",
       " 144150,\n",
       " 185943,\n",
       " 337820,\n",
       " 71459,\n",
       " 189357,\n",
       " 127224,\n",
       " 140179,\n",
       " 65016,\n",
       " 264265,\n",
       " 143374,\n",
       " 244497,\n",
       " 206483,\n",
       " 205776,\n",
       " 48328,\n",
       " 226804,\n",
       " 267679,\n",
       " 265698,\n",
       " 35686,\n",
       " 228060,\n",
       " 113666,\n",
       " 127177,\n",
       " 138981,\n",
       " 69210,\n",
       " 173545,\n",
       " 162930,\n",
       " 116981,\n",
       " 1206,\n",
       " 308249,\n",
       " 195032,\n",
       " 47239,\n",
       " 252009,\n",
       " 57424,\n",
       " 217869,\n",
       " 344372,\n",
       " 298787,\n",
       " 219896,\n",
       " 119063,\n",
       " 51316,\n",
       " 21932,\n",
       " 171513,\n",
       " 256927]"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_features[0:1,0:100]\n",
    "train_features[0:1,0:100]\n",
    "train_features.take(11,1)[1000]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(207104, 2)\n",
      "(1085721, 2)\n"
     ]
    }
   ],
   "source": [
    "\n",
    "test_targets_values[0:10]\n",
    "print(train_targets_values.shape)\n",
    "print(test_targets_values.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 2135. 10582.  5686.  3169.  6283.  6538. 11236.  1962. 10714.  9616.\n",
      "   4916.  4251. 11842.  4146.  3812.  8662.  7494.   323.  5512.  4410.\n",
      "   6467.  8205.   912.  7345.  4122.  7140.  2677.   487.  6452.  2740.\n",
      "   5123.   297.  9410.    61.  3834.  9847.  9633. 10418.  5093.  2585.\n",
      "   2893.  3562.  5963. 11417. 10155.  5995.  6836.  2854.  7370.  7120.\n",
      "   1870. 10785. 10794.  1882.  8462.  3457.  1366.  5051. 11083.  1723.\n",
      "  10280.   717.  7821. 10443.  9212.  8504.  2890. 11084.   312.  5816.\n",
      "   1748.  9893.  9043.  7411.  2634.  2214.  4988.  2209.  2639.  8903.\n",
      "   5677.  4022.  8752. 10174.  6441. 10565.  6216. 11407. 11659.  7817.\n",
      "  11601.  9567.  7047. 10416.  5109.  5108.  4693.  9343.  9449. 10359.]\n",
      " [ 2135. 10582.  5686.  3169.  6283.  6538. 11236.  1962. 10714.  9616.\n",
      "   4916.  4251. 11842.  4146.  3812.  8662.  7494.   323.  5512.  4410.\n",
      "   6467.  8205.   912.  7345.  4122.  7140.  2677.   487.  6452.  2740.\n",
      "   5123.   297.  9410.    61.  3834.  9847.  9633. 10418.  5093.  2585.\n",
      "   2893.  3562.  5963. 11417. 10155.  5995.  6836.  2854.  7370.  7120.\n",
      "   1870. 10785. 10794.  1882.  8462.  3457.  1366.  5051. 11083.  1723.\n",
      "  10280.   717.  7821. 10443.  9212.  8504.  2890. 11084.   312.  5816.\n",
      "   1748.  9893.  9043.  7411.  2634.  2214.  4988.  2209.  2639.  8903.\n",
      "   5677.  4022.  8752. 10174.  6441. 10565.  6216. 11407. 11659.  7817.\n",
      "  11601.  9567.  7047. 10416.  5109.  5108.  4693.  9343.  9449. 10359.]]\n"
     ]
    }
   ],
   "source": [
    "user_cateids = np.zeros([2, 100])\n",
    "for i in range(2):\n",
    "    user_cateids[i] = train_features.take(10,1)[i]\n",
    "print(user_cateids)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load the preprocessed data from disk"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0\n"
     ]
    }
   ],
   "source": [
    "UserID_map_max_len, ItemID_map_max_len, User_Cluster_map_max_len, \\\n",
    "User_CateIDs_map_max_len, User_BrandIDs_map_max_len, \\\n",
    "CategoryID_map_max_len, ShopID_map_max_len, BrandID_map_max_len, Com_CateID_map_max_len,\\\n",
    "Com_ShopID_map_max_len, Com_BrandID_map_max_len, PID_map_max_len, UserID2int, ItemID2int,\\\n",
    "User_Cluster2int, User_CateIDs2int, User_BrandIDs2int,  CategoryID2int, ShopID2int, BrandID2int, Com_CateID2int, \\\n",
    "Com_ShopID2int, Com_BrandID2int, PID2int, train_features, train_targets_values, train_data, \\\n",
    "test_features, test_targets_values, test_data = pickle.load(open('./save/preprocess.p', mode='rb'))\n",
    "print(0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Design"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Architecture\n",
    "\n",
    "<img src=\"assets/esmm.png\"/>\n"
   ]
  },
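  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The key idea in the architecture above is the entire-space decomposition pCTCVR = pCTR * pCVR: both pCTR and pCTCVR are supervised over the full impression space, so pCVR is learned implicitly without sample selection bias. A minimal numpy sketch with made-up probabilities (not the notebook's TF graph):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "ctr_prob = np.array([0.10, 0.20, 0.05])  # p(click | impression)\n",
    "cvr_prob = np.array([0.50, 0.10, 0.20])  # p(buy | click, impression)\n",
    "ctcvr_prob = ctr_prob * cvr_prob         # p(click and buy | impression)\n",
    "print(ctcvr_prob)  # [0.05 0.02 0.01]\n",
    "```\n",
    "\n",
    "The same product appears later in the loss construction as `ctcvr_prob_one = ctr_prob_one * cvr_prob_one`."
   ]
  },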
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Embedding Lookup Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py:118: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n",
      "Instructions for updating:\n",
      "Use `tf.global_variables_initializer` instead.\n",
      "[[0.83833286]]\n",
      "[0.83833286]\n",
      "[[0.03005998]\n",
      " [0.83833286]\n",
      " [0.18680768]\n",
      " [0.78803244]\n",
      " [0.33242076]\n",
      " [0.27561619]\n",
      " [0.64413792]\n",
      " [0.46366025]\n",
      " [0.66190703]\n",
      " [0.30617731]]\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "import numpy as np\n",
    " \n",
    "c = np.random.random([10,1])\n",
    "b = tf.nn.embedding_lookup(c, [1])\n",
    "a = tf.nn.embedding_lookup(c, 1)\n",
    "with tf.Session() as sess:\n",
    "    sess.run(tf.initialize_all_variables())\n",
    "    print(sess.run(b))\n",
    "    print(sess.run(a))\n",
    "    print(c)\n"
   ]
  },
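  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the variable-length features (`User_CateIDs`, `User_BrandIDs`) the lookup returns one vector per ID, which is then sum-pooled, as in the embedding layers defined further below. A plain numpy sketch of those two steps with toy values:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "embed_matrix = np.arange(12, dtype=float).reshape(4, 3)  # 4 IDs, dim 3\n",
    "ids = np.array([[1, 3, 0]])      # one padded ID sequence, shape (1, 3)\n",
    "looked_up = embed_matrix[ids]    # lookup, shape (1, 3, 3)\n",
    "pooled = looked_up.sum(axis=1, keepdims=True)  # sum pooling, shape (1, 1, 3)\n",
    "print(pooled)  # [[[12. 15. 18.]]]\n",
    "```\n",
    "\n",
    "`keepdims=True` here plays the role of `keep_dims=True` in the `tf.reduce_sum` calls, so pooled variable-length features keep the same rank as the single-ID features."
   ]
  },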
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### TensorFlow slice Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(3, 2)\n",
      "(3, 1)\n",
      "[[100 -99]\n",
      " [ 99 -98]\n",
      " [ 98 -97]]\n"
     ]
    }
   ],
   "source": [
    "t = tf.constant([[1,100],[2,99],[3,98]])\n",
    "t1 = tf.slice(t, [0,1], [-1, 1])  # [[100], [99], [98]], shape (3, 1)\n",
    "t2 = 1 - t1  # [[-99], [-98], [-97]]\n",
    "t3 = tf.concat([t1,t2],axis=1)\n",
    "print(t.shape)\n",
    "print(t1.shape)\n",
    "with tf.Session() as sess:\n",
    "    sess.run(tf.initialize_all_variables())\n",
    "    #print(sess.run(t1))\n",
    "    #print(sess.run(t2))\n",
    "    print(sess.run(t3))\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Train/Test Split Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[0 1]\n",
      " [2 3]\n",
      " [4 5]\n",
      " [6 7]\n",
      " [8 9]]\n",
      "range(0, 5)\n",
      "[[2 3]\n",
      " [8 9]]\n",
      "[1, 4]\n"
     ]
    }
   ],
   "source": [
    "## Train/test split example\n",
    "import numpy as np\n",
    "from sklearn.model_selection import train_test_split\n",
    "X, y = np.arange(10).reshape((5, 2)), range(5)\n",
    "#X\n",
    "#list(y)\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    X, y, test_size=0.33, random_state=42)\n",
    "# X_train\n",
    "# y_train\n",
    "print(X)\n",
    "print(y)\n",
    "print(X_test)\n",
    "print(y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Computing AUC"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# AUC over the entire batch, suitable for CTR and CTCVR\n",
    "def calc_auc(raw_arr):\n",
    "    \n",
    "    # sort by pred value, from small to big\n",
    "    arr = sorted(raw_arr, key=lambda d:d[2])\n",
    "\n",
    "    auc = 0.0\n",
    "    fp1, tp1, fp2, tp2 = 0.0, 0.0, 0.0, 0.0\n",
    "    for record in arr:\n",
    "        fp2 += record[0] # noclick\n",
    "        tp2 += record[1] # click\n",
    "        auc += (fp2 - fp1) * (tp2 + tp1)\n",
    "        fp1, tp1 = fp2, tp2\n",
    "\n",
    "    # if all samples are non-click or all are click, discard\n",
    "    threshold = len(arr) - 1e-3\n",
    "    if tp2 > threshold or fp2 > threshold:\n",
    "        return -0.5\n",
    "\n",
    "    if tp2 * fp2 > 0.0:  # normal auc\n",
    "        return (1.0 - auc / (2.0 * tp2 * fp2))\n",
    "    else:\n",
    "        return None\n",
    "\n",
    "### Filtered AUC (CVR AUC is computed only on the Click=1 sample subset)\n",
    "def calc_auc_with_filter(raw_arr, filter_arr):\n",
    "    ## get filter array row indexes\n",
    "    filter_index = np.nonzero(filter_arr)[0].tolist()\n",
    "    input_arr = [raw_arr[index] for index in filter_index]\n",
    "    auc_val = calc_auc(input_arr)\n",
    "    return auc_val\n"
   ]
  },
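  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, AUC can also be computed directly from its pairwise definition: the fraction of (positive, negative) pairs in which the positive sample receives the higher score. A small self-contained sketch (`pairwise_auc` is a helper written for this example, not part of the notebook's training code):\n",
    "\n",
    "```python\n",
    "def pairwise_auc(labels, scores):\n",
    "    # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5\n",
    "    pos = [s for y, s in zip(labels, scores) if y == 1]\n",
    "    neg = [s for y, s in zip(labels, scores) if y == 0]\n",
    "    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)\n",
    "    return wins / (len(pos) * len(neg))\n",
    "\n",
    "print(pairwise_auc([1, 0, 1, 0], [0.8, 0.9, 0.7, 0.1]))  # 0.5\n",
    "```\n",
    "\n",
    "On the same data, `calc_auc` above returns the same 0.5 when each sample is formatted as `[1 - y, y, score]`."
   ]
  },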
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building the Neural Network"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define Inputs\n",
    "Define the input placeholders"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def get_inputs():\n",
    "    UserID = tf.placeholder(tf.int32, [None, 1], name=\"UserID\")\n",
    "    ItemID = tf.placeholder(tf.int32, [None, 1], name=\"ItemID\")\n",
    "    User_Cluster = tf.placeholder(tf.int32, [None, 1], name=\"User_Cluster\")\n",
    "    User_CateIDs = tf.placeholder(tf.int32, [None, 100], name=\"User_CateIDs\")\n",
    "    User_BrandIDs = tf.placeholder(tf.int32, [None, 100], name=\"User_BrandIDs\")\n",
    "    \n",
    "    CategoryID = tf.placeholder(tf.int32, [None, 1], name=\"CategoryID\")\n",
    "    ShopID = tf.placeholder(tf.int32, [None, 1], name=\"ShopID\")\n",
    "    BrandID = tf.placeholder(tf.int32, [None, 1], name=\"BrandID\")\n",
    "    Com_CateID = tf.placeholder(tf.int32, [None, 1], name=\"Com_CateID\")\n",
    "    Com_ShopID = tf.placeholder(tf.int32, [None, 1], name=\"Com_ShopID\")\n",
    "    Com_BrandID = tf.placeholder(tf.int32, [None, 1], name=\"Com_BrandID\")\n",
    "    PID = tf.placeholder(tf.int32, [None, 1], name=\"PID\")\n",
    "    \n",
    "    targets = tf.placeholder(tf.float32, [None, 2], name=\"targets\")\n",
    "    LearningRate = tf.placeholder(tf.float32, name = \"LearningRate\")\n",
    "    return  UserID, ItemID, User_Cluster, CategoryID, ShopID, BrandID, Com_CateID,\\\n",
    "            Com_ShopID, Com_BrandID, PID, User_CateIDs, User_BrandIDs, targets, LearningRate"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Compute the Max ID of Each Feature\n",
    "Used to size the embedding matrices"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "318046 687743 98 12132 368349 6795 281602 109624 5786 125182 56079 3\n"
     ]
    }
   ],
   "source": [
    "# Dimension of the embedding matrices\n",
    "embed_dim = 12\n",
    "# Number of distinct UserIDs\n",
    "UserID_max = max(UserID2int.values()) + 1 \n",
    "# Number of distinct ItemIDs\n",
    "ItemID_max = max(ItemID2int.values()) + 1 \n",
    "User_Cluster_max = max(User_Cluster2int.values()) + 1 \n",
    "User_CateIDs_max = max(User_CateIDs2int.values()) + 1 \n",
    "User_BrandIDs_max = max(User_BrandIDs2int.values()) + 1 \n",
    "\n",
    "CategoryID_max = max(CategoryID2int.values()) + 1 \n",
    "ShopID_max = max(ShopID2int.values()) + 1 \n",
    "BrandID_max = max(BrandID2int.values()) + 1 \n",
    "Com_CateID_max = max(Com_CateID2int.values()) + 1 \n",
    "Com_ShopID_max = max(Com_ShopID2int.values()) + 1 \n",
    "Com_BrandID_max = max(Com_BrandID2int.values()) + 1 \n",
    "PID_max = max(PID2int.values()) + 1 \n",
    "\n",
    "# Pooling method for variable-length features\n",
    "combiner = \"sum\"\n",
    "\n",
    "print(UserID_max, ItemID_max, User_Cluster_max, User_CateIDs_max, User_BrandIDs_max, CategoryID_max, ShopID_max, BrandID_max, \\\n",
    "      Com_CateID_max, Com_ShopID_max, Com_BrandID_max, PID_max)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Embed All Inputs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\n",
    "def define_embedding_layers(UserID, ItemID, User_Cluster, CategoryID, ShopID, BrandID, Com_CateID,\\\n",
    "            Com_ShopID, Com_BrandID, PID, User_CateIDs, User_BrandIDs):\n",
    "\n",
    "    UserID_embed_matrix = tf.Variable(tf.random_normal([UserID_max, embed_dim], 0, 0.001))\n",
    "    UserID_embed_layer = tf.nn.embedding_lookup(UserID_embed_matrix, UserID)\n",
    "    if combiner == \"sum\":\n",
    "        UserID_embed_layer = tf.reduce_sum(UserID_embed_layer, axis=1, keep_dims=True)\n",
    "        \n",
    "    ItemID_embed_matrix = tf.Variable(tf.random_uniform([ItemID_max, embed_dim], 0, 0.001))\n",
    "    ItemID_embed_layer = tf.nn.embedding_lookup(ItemID_embed_matrix, ItemID)\n",
    "    if combiner == \"sum\":\n",
    "        ItemID_embed_layer = tf.reduce_sum(ItemID_embed_layer, axis=1, keep_dims=True)\n",
    "\n",
    "    User_Cluster_embed_matrix = tf.Variable(tf.random_uniform([User_Cluster_max, embed_dim], 0, 0.001))\n",
    "    User_Cluster_embed_layer = tf.nn.embedding_lookup(User_Cluster_embed_matrix, User_Cluster)\n",
    "    if combiner == \"sum\":\n",
    "        User_Cluster_embed_layer = tf.reduce_sum(User_Cluster_embed_layer, axis=1, keep_dims=True)\n",
    "        \n",
    "    User_CateIDs_embed_matrix = tf.Variable(tf.random_uniform([User_CateIDs_max, embed_dim], 0, 0.001))\n",
    "    User_CateIDs_embed_layer = tf.nn.embedding_lookup(User_CateIDs_embed_matrix, User_CateIDs)\n",
    "    if combiner == \"sum\":\n",
    "        User_CateIDs_embed_layer = tf.reduce_sum(User_CateIDs_embed_layer, axis=1, keep_dims=True)\n",
    "        \n",
    "    User_BrandIDs_embed_matrix = tf.Variable(tf.random_uniform([User_BrandIDs_max, embed_dim], 0, 0.001))\n",
    "    User_BrandIDs_embed_layer = tf.nn.embedding_lookup(User_BrandIDs_embed_matrix, User_BrandIDs)\n",
    "    if combiner == \"sum\":\n",
    "        User_BrandIDs_embed_layer = tf.reduce_sum(User_BrandIDs_embed_layer, axis=1, keep_dims=True)\n",
    "        \n",
    "    CategoryID_embed_matrix = tf.Variable(tf.random_uniform([CategoryID_max, embed_dim], 0, 0.001))\n",
    "    CategoryID_embed_layer = tf.nn.embedding_lookup(CategoryID_embed_matrix, CategoryID)\n",
    "    if combiner == \"sum\":\n",
    "        CategoryID_embed_layer = tf.reduce_sum(CategoryID_embed_layer, axis=1, keep_dims=True)\n",
    "    \n",
    "    ShopID_embed_matrix = tf.Variable(tf.random_uniform([ShopID_max, embed_dim], 0, 0.001))\n",
    "    ShopID_embed_layer = tf.nn.embedding_lookup(ShopID_embed_matrix, ShopID)\n",
    "    if combiner == \"sum\":\n",
    "        ShopID_embed_layer = tf.reduce_sum(ShopID_embed_layer, axis=1, keep_dims=True)\n",
    "\n",
    "    BrandID_embed_matrix = tf.Variable(tf.random_uniform([BrandID_max, embed_dim], 0, 0.001))\n",
    "    BrandID_embed_layer = tf.nn.embedding_lookup(BrandID_embed_matrix, BrandID)\n",
    "    if combiner == \"sum\":\n",
    "        BrandID_embed_layer = tf.reduce_sum(BrandID_embed_layer, axis=1, keep_dims=True)\n",
    "\n",
    "    Com_CateID_embed_matrix = tf.Variable(tf.random_uniform([Com_CateID_max, embed_dim], 0, 0.001))\n",
    "    Com_CateID_embed_layer = tf.nn.embedding_lookup(Com_CateID_embed_matrix, Com_CateID)\n",
    "    if combiner == \"sum\":\n",
    "        Com_CateID_embed_layer = tf.reduce_sum(Com_CateID_embed_layer, axis=1, keep_dims=True)\n",
    "\n",
    "    Com_ShopID_embed_matrix = tf.Variable(tf.random_uniform([Com_ShopID_max, embed_dim], 0, 0.001))\n",
    "    Com_ShopID_embed_layer = tf.nn.embedding_lookup(Com_ShopID_embed_matrix, Com_ShopID)\n",
    "    if combiner == \"sum\":\n",
    "        Com_ShopID_embed_layer = tf.reduce_sum(Com_ShopID_embed_layer, axis=1, keep_dims=True)\n",
    "\n",
    "    Com_BrandID_embed_matrix = tf.Variable(tf.random_uniform([Com_BrandID_max, embed_dim], 0, 0.001))\n",
    "    Com_BrandID_embed_layer = tf.nn.embedding_lookup(Com_BrandID_embed_matrix, Com_BrandID)\n",
    "    if combiner == \"sum\":\n",
    "        Com_BrandID_embed_layer = tf.reduce_sum(Com_BrandID_embed_layer, axis=1, keep_dims=True)\n",
    "\n",
    "\n",
    "    PID_embed_matrix = tf.Variable(tf.random_uniform([PID_max, embed_dim], 0, 0.001))\n",
    "    PID_embed_layer = tf.nn.embedding_lookup(PID_embed_matrix, PID)\n",
    "    if combiner == \"sum\":\n",
    "        PID_embed_layer = tf.reduce_sum(PID_embed_layer, axis=1, keep_dims=True)\n",
    "    '''    \n",
    "    esmm_embedding_layer = tf.concat([UserID_embed_layer, ItemID_embed_layer, User_Cluster_embed_layer,\\\n",
    "                                     CategoryID_embed_layer, ShopID_embed_layer, BrandID_embed_layer,\\\n",
    "                                     Com_CateID_embed_layer, Com_ShopID_embed_layer, Com_BrandID_embed_layer,\\\n",
    "                                      PID_embed_layer,], 2)\n",
    "    esmm_embedding_layer = tf.reshape(esmm_embedding_layer, [-1, embed_dim * 10])\n",
    "    '''\n",
    "    '''\n",
    "    # The dataset is small, so keep the UserID feature plus a few low-cardinality features\n",
    "    esmm_embedding_layer = tf.concat([UserID_embed_layer,User_Cluster_embed_layer,\\\n",
    "                                     CategoryID_embed_layer,\\\n",
    "                                     Com_CateID_embed_layer,\\\n",
    "                                      PID_embed_layer,], 2)\n",
    "    esmm_embedding_layer = tf.reshape(esmm_embedding_layer, [-1, embed_dim * 5])\n",
    "    '''\n",
    "    # The dataset is small, so keep the user behavior sequences, ItemID, and a few low-cardinality features\n",
    "    esmm_embedding_layer = tf.concat([User_CateIDs_embed_layer,\\\n",
    "                                      User_BrandIDs_embed_layer,\\\n",
    "                                      ItemID_embed_layer,\\\n",
    "                                     CategoryID_embed_layer,\\\n",
    "                                     Com_CateID_embed_layer,\\\n",
    "                                      PID_embed_layer,], 2)\n",
    "    esmm_embedding_layer = tf.reshape(esmm_embedding_layer, [-1, embed_dim * 6])\n",
    "    return esmm_embedding_layer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def define_ctr_layer(esmm_embedding_layer):\n",
    "    ctr_layer_1 = tf.layers.dense(esmm_embedding_layer, 200, activation=tf.nn.relu)\n",
    "    ctr_layer_2 = tf.layers.dense(ctr_layer_1, 80, activation=tf.nn.relu)\n",
    "    ctr_layer_3 = tf.layers.dense(ctr_layer_2, 2) # [nonclick, click]\n",
    "    ctr_prob = tf.nn.softmax(ctr_layer_3) + 0.00000001\n",
    "    return ctr_prob"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def define_cvr_layer(esmm_embedding_layer):\n",
    "    cvr_layer_1 = tf.layers.dense(esmm_embedding_layer, 200, activation=tf.nn.relu)\n",
    "    cvr_layer_2 = tf.layers.dense(cvr_layer_1, 80, activation=tf.nn.relu)\n",
    "    cvr_layer_3 = tf.layers.dense(cvr_layer_2, 2) # [nonbuy, buy]\n",
    "    cvr_prob = tf.nn.softmax(cvr_layer_3) + 0.00000001\n",
    "    return cvr_prob"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# The demo data is small and purchases are very sparse, so the CTR and CVR towers share all layers up to the second-to-last one\n",
    "def define_ctr_cvr_layer(esmm_embedding_layer):\n",
    "    layer_1 = tf.layers.dense(esmm_embedding_layer, 128 , activation=tf.nn.relu)\n",
    "    layer_2 = tf.layers.dense(layer_1, 16, activation=tf.nn.relu)\n",
    "    layer_3 = tf.layers.dense(layer_2, 2)\n",
    "    ctr_prob = tf.nn.softmax(layer_3) + 0.00000001\n",
    "    layer_4 = tf.layers.dense(layer_2, 2)\n",
    "    cvr_prob = tf.nn.softmax(layer_4) + 0.00000001\n",
    "    return ctr_prob, cvr_prob"
   ]
  },
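  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each head ends with a two-way softmax plus a tiny epsilon (`0.00000001`, i.e. `1e-8`) so the log-loss below never evaluates `log(0)`. A standalone numpy sketch of that output step (illustrative logits):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(logits):\n",
    "    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable\n",
    "    return e / e.sum(axis=1, keepdims=True)\n",
    "\n",
    "logits = np.array([[2.0, 0.0]])  # [non-event, event] scores from the last dense layer\n",
    "prob = softmax(logits) + 1e-8    # epsilon keeps log(prob) finite\n",
    "print(prob)\n",
    "```\n",
    "\n",
    "The second column is then read off with `tf.slice(..., [0, 1], [-1, 1])` to get the probability of the positive event."
   ]
  },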
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Build the Computation Graph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "tf.reset_default_graph()\n",
    "train_graph = tf.Graph()\n",
    "with train_graph.as_default():\n",
    "    # Get the input placeholders\n",
    "    UserID, ItemID, User_Cluster, CategoryID, ShopID, BrandID, Com_CateID,\\\n",
    "            Com_ShopID, Com_BrandID, PID, User_CateIDs, User_BrandIDs, targets, lr = get_inputs()\n",
    "\n",
    "    # Embedding Input Layer\n",
    "    esmm_embedding_layer = define_embedding_layers(UserID, ItemID, User_Cluster, CategoryID, ShopID, BrandID, Com_CateID,\\\n",
    "            Com_ShopID, Com_BrandID, PID, User_CateIDs, User_BrandIDs)\n",
    "\n",
    "    # CTR Network\n",
    "    #ctr_prob = define_ctr_layer(esmm_embedding_layer)\n",
    "    \n",
    "    # CVR Network\n",
    "    #cvr_prob = define_cvr_layer(esmm_embedding_layer)\n",
    "    \n",
    "    # The demo dataset is tiny and purchases are sparse, so the CTR and CVR towers share all layers below the final one\n",
    "    ctr_prob, cvr_prob = define_ctr_cvr_layer(esmm_embedding_layer)\n",
    "    \n",
    "    with tf.name_scope(\"loss\"):\n",
    "        \n",
    "        ctr_prob_one = tf.slice(ctr_prob, [0,1], [-1, 1]) # p(click), shape [batch_size, 1]\n",
    "        cvr_prob_one = tf.slice(cvr_prob, [0,1], [-1, 1]) # p(buy | click), shape [batch_size, 1]\n",
    "        \n",
    "        ctcvr_prob_one = ctr_prob_one * cvr_prob_one # ESMM identity: pCTCVR = pCTR * pCVR\n",
    "        ctcvr_prob = tf.concat([1 - ctcvr_prob_one, ctcvr_prob_one], axis=1)\n",
    "        \n",
    "        ctr_label =  tf.slice(targets, [0,0], [-1, 1]) # target: [click, buy]\n",
    "        ctr_label = tf.concat([1 - ctr_label, ctr_label], axis=1) # [1-click, click]\n",
    "\n",
    "        cvr_label = tf.slice(targets, [0,1], [-1, 1])\n",
    "        ctcvr_label = tf.concat([1 - cvr_label, cvr_label], axis=1)\n",
    "        \n",
    "        # Single column indicating whether click == 1\n",
    "        ctr_clk = tf.slice(targets, [0,0], [-1, 1])\n",
    "        ctr_clk_dup = tf.concat([ctr_clk, ctr_clk], axis=1)\n",
    "        \n",
    "        # clicked subset CVR loss\n",
    "        cvr_loss = - tf.multiply(tf.log(cvr_prob) * ctcvr_label, ctr_clk_dup)\n",
    "        # batch CTR loss\n",
    "        ctr_loss = - tf.log(ctr_prob) * ctr_label # -y*log(p)-(1-y)*log(1-p)\n",
    "        # batch CTCVR loss\n",
    "        ctcvr_loss = - tf.log(ctcvr_prob) * ctcvr_label\n",
    "        \n",
    "        loss = tf.reduce_mean(ctr_loss + ctcvr_loss + cvr_loss)\n",
    "        #loss = tf.reduce_mean(ctr_loss + ctcvr_loss)\n",
    "        #loss = tf.reduce_mean(ctr_loss + cvr_loss)\n",
    "        #loss = tf.reduce_mean(cvr_loss)\n",
    "        ctr_loss = tf.reduce_mean(ctr_loss)\n",
    "        cvr_loss = tf.reduce_mean(cvr_loss)\n",
    "        ctcvr_loss = tf.reduce_mean(ctcvr_loss)\n",
    "\n",
    "    # Optimize the loss\n",
    "    #train_op = tf.train.AdamOptimizer(lr).minimize(loss)  #cost\n",
    "    global_step = tf.Variable(0, name=\"global_step\", trainable=False)\n",
    "    optimizer = tf.train.AdamOptimizer(lr)\n",
    "    gradients = optimizer.compute_gradients(loss)\n",
    "    train_op = optimizer.apply_gradients(gradients, global_step=global_step)\n",
    "    "
   ]
  },
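  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch with assumed toy numbers (not part of the original graph): the ESMM\n",
    "# decomposition pCTCVR = pCTR * pCVR over the entire impression space, mirroring the\n",
    "# ctcvr_prob_one = ctr_prob_one * cvr_prob_one line in the graph above, plus the\n",
    "# corresponding binary cross-entropy of the derived CTCVR head.\n",
    "import numpy as np\n",
    "\n",
    "ctr_prob_demo = np.array([[0.10], [0.40], [0.05]])   # p(click | impression)\n",
    "cvr_prob_demo = np.array([[0.20], [0.50], [0.30]])   # p(buy | click, impression)\n",
    "ctcvr_prob_demo = ctr_prob_demo * cvr_prob_demo      # p(click & buy | impression): 0.02, 0.2, 0.015\n",
    "\n",
    "targets_demo = np.array([[1, 0], [1, 1], [0, 0]])    # columns: [click, buy]; buy implies click\n",
    "ctcvr_label_demo = targets_demo[:, 1:2]\n",
    "\n",
    "eps = 1e-8  # same role as the epsilon added after softmax in the towers\n",
    "ctcvr_loss_demo = -np.mean(ctcvr_label_demo * np.log(ctcvr_prob_demo + eps)\n",
    "                           + (1 - ctcvr_label_demo) * np.log(1 - ctcvr_prob_demo + eps))\n",
    "print(ctcvr_prob_demo.ravel())\n",
    "print(ctcvr_loss_demo)\n"
   ]
  },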
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train the network"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Fetching batches"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def get_batches(Xs, ys, batch_size):\n",
    "    for start in range(0, len(Xs), batch_size):\n",
    "        end = min(start + batch_size, len(Xs))\n",
    "        yield Xs[start:end], ys[start:end]"
   ]
  },
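  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick sanity check of get_batches on toy arrays (illustrative only): the generator\n",
    "# yields aligned feature/label slices, with a shorter final batch when len(Xs) is not\n",
    "# divisible by batch_size. Note that the training loop below iterates\n",
    "# len(train_X) // batch_size times, so that trailing partial batch is skipped there.\n",
    "import numpy as np\n",
    "\n",
    "Xs_demo = np.arange(10)\n",
    "ys_demo = np.arange(10) * 10\n",
    "for bx, by in get_batches(Xs_demo, ys_demo, batch_size=4):\n",
    "    print(len(bx), bx.tolist(), by.tolist())  # batch sizes: 4, 4, 2\n"
   ]
  },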
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Hyperparameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Number of Epochs\n",
    "num_epochs = 1\n",
    "# Batch Size\n",
    "batch_size = 10000\n",
    "\n",
    "# Test Batch Size\n",
    "test_batch_size = 10000\n",
    "\n",
    "# Learning Rate\n",
    "learning_rate = 0.01\n",
    "# Show stats for every n number of batches\n",
    "show_every_n_batches = 10\n",
    "show_test_every_n_batches = 10\n",
    "\n",
    "save_dir = './save'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Start training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing to /Users/maxiao/Documents/workspace4py/TSRNN/esmm_public/runs/1544353154\n",
      "\n",
      "train batch click num: 532  buy num: 105\n",
      "train batch click num: 554  buy num: 102\n",
      "train batch click num: 588  buy num: 129\n",
      "train batch click num: 562  buy num: 125\n",
      "train batch click num: 584  buy num: 120\n",
      "train batch click num: 522  buy num: 104\n",
      "train batch click num: 573  buy num: 107\n",
      "train batch click num: 543  buy num: 106\n",
      "train batch click num: 563  buy num: 112\n",
      "train batch click num: 532  buy num: 94\n",
      "train batch click num: 590  buy num: 134\n",
      "11 11 11\n",
      "2018-12-09T19:12:29.492830: Epoch 0 Batch 10/106  train_loss=0.179 train_ctr_loss=0.123 train_cvr_loss=0.016 train_ctcvr_loss=0.040 train_ctr_auc=0.494 train_cvr_auc=0.566 train_ctcvr_auc=0.530\n",
      "train batch click num: 604  buy num: 120\n",
      "train batch click num: 588  buy num: 146\n",
      "train batch click num: 550  buy num: 142\n",
      "train batch click num: 606  buy num: 127\n",
      "train batch click num: 617  buy num: 143\n",
      "train batch click num: 552  buy num: 138\n",
      "train batch click num: 573  buy num: 129\n",
      "train batch click num: 562  buy num: 118\n",
      "train batch click num: 611  buy num: 158\n",
      "train batch click num: 540  buy num: 113\n",
      "10 10 10\n",
      "2018-12-09T19:24:19.717013: Epoch 0 Batch 20/106  train_loss=0.152 train_ctr_loss=0.107 train_cvr_loss=0.014 train_ctcvr_loss=0.031 train_ctr_auc=0.510 train_cvr_auc=0.602 train_ctcvr_auc=0.583\n",
      "train batch click num: 576  buy num: 137\n",
      "train batch click num: 612  buy num: 132\n",
      "train batch click num: 594  buy num: 131\n",
      "train batch click num: 627  buy num: 152\n",
      "train batch click num: 614  buy num: 129\n",
      "train batch click num: 576  buy num: 129\n",
      "train batch click num: 587  buy num: 159\n",
      "train batch click num: 621  buy num: 154\n",
      "train batch click num: 599  buy num: 141\n",
      "train batch click num: 564  buy num: 116\n",
      "10 10 10\n",
      "2018-12-09T19:36:26.466829: Epoch 0 Batch 30/106  train_loss=0.154 train_ctr_loss=0.109 train_cvr_loss=0.014 train_ctcvr_loss=0.031 train_ctr_auc=0.516 train_cvr_auc=0.637 train_ctcvr_auc=0.616\n",
      "train batch click num: 577  buy num: 133\n",
      "train batch click num: 566  buy num: 132\n",
      "train batch click num: 604  buy num: 148\n",
      "train batch click num: 623  buy num: 167\n",
      "train batch click num: 563  buy num: 130\n",
      "train batch click num: 599  buy num: 126\n",
      "train batch click num: 547  buy num: 145\n",
      "train batch click num: 544  buy num: 142\n",
      "train batch click num: 600  buy num: 120\n",
      "train batch click num: 579  buy num: 143\n",
      "10 10 10\n",
      "2018-12-09T19:48:27.895338: Epoch 0 Batch 40/106  train_loss=0.163 train_ctr_loss=0.111 train_cvr_loss=0.015 train_ctcvr_loss=0.036 train_ctr_auc=0.529 train_cvr_auc=0.656 train_ctcvr_auc=0.636\n",
      "train batch click num: 429  buy num: 60\n",
      "train batch click num: 422  buy num: 82\n",
      "train batch click num: 412  buy num: 76\n",
      "train batch click num: 414  buy num: 52\n",
      "train batch click num: 417  buy num: 63\n",
      "train batch click num: 419  buy num: 65\n",
      "train batch click num: 407  buy num: 53\n",
      "train batch click num: 486  buy num: 71\n",
      "train batch click num: 428  buy num: 52\n",
      "train batch click num: 419  buy num: 71\n",
      "10 10 10\n",
      "2018-12-09T20:02:39.331137: Epoch 0 Batch 50/106  train_loss=0.117 train_ctr_loss=0.087 train_cvr_loss=0.009 train_ctcvr_loss=0.021 train_ctr_auc=0.535 train_cvr_auc=0.669 train_ctcvr_auc=0.646\n",
      "train batch click num: 430  buy num: 83\n",
      "train batch click num: 376  buy num: 57\n",
      "train batch click num: 411  buy num: 58\n",
      "train batch click num: 428  buy num: 87\n",
      "train batch click num: 443  buy num: 88\n",
      "train batch click num: 463  buy num: 66\n",
      "train batch click num: 457  buy num: 74\n",
      "train batch click num: 463  buy num: 62\n",
      "train batch click num: 422  buy num: 79\n",
      "train batch click num: 445  buy num: 85\n",
      "10 10 10\n",
      "2018-12-09T20:17:38.275824: Epoch 0 Batch 60/106  train_loss=0.124 train_ctr_loss=0.091 train_cvr_loss=0.010 train_ctcvr_loss=0.024 train_ctr_auc=0.553 train_cvr_auc=0.681 train_ctcvr_auc=0.654\n",
      "train batch click num: 410  buy num: 55\n",
      "train batch click num: 438  buy num: 71\n",
      "train batch click num: 443  buy num: 78\n",
      "train batch click num: 377  buy num: 77\n",
      "train batch click num: 392  buy num: 60\n",
      "train batch click num: 399  buy num: 62\n",
      "train batch click num: 406  buy num: 78\n",
      "train batch click num: 438  buy num: 71\n",
      "train batch click num: 402  buy num: 61\n",
      "train batch click num: 351  buy num: 43\n",
      "10 10 10\n",
      "2018-12-09T20:32:55.114818: Epoch 0 Batch 70/106  train_loss=0.097 train_ctr_loss=0.077 train_cvr_loss=0.006 train_ctcvr_loss=0.014 train_ctr_auc=0.543 train_cvr_auc=0.684 train_ctcvr_auc=0.648\n",
      "train batch click num: 386  buy num: 40\n",
      "train batch click num: 371  buy num: 45\n",
      "train batch click num: 356  buy num: 50\n",
      "train batch click num: 401  buy num: 51\n",
      "train batch click num: 372  buy num: 44\n",
      "train batch click num: 356  buy num: 49\n",
      "train batch click num: 353  buy num: 41\n",
      "train batch click num: 374  buy num: 33\n",
      "train batch click num: 404  buy num: 44\n",
      "train batch click num: 409  buy num: 55\n",
      "10 10 10\n",
      "2018-12-09T20:47:25.741722: Epoch 0 Batch 80/106  train_loss=0.109 train_ctr_loss=0.085 train_cvr_loss=0.007 train_ctcvr_loss=0.017 train_ctr_auc=0.560 train_cvr_auc=0.670 train_ctcvr_auc=0.650\n",
      "train batch click num: 419  buy num: 73\n",
      "train batch click num: 419  buy num: 61\n",
      "train batch click num: 447  buy num: 65\n",
      "train batch click num: 387  buy num: 41\n",
      "train batch click num: 411  buy num: 43\n",
      "train batch click num: 405  buy num: 49\n",
      "train batch click num: 400  buy num: 69\n",
      "train batch click num: 422  buy num: 63\n",
      "train batch click num: 381  buy num: 48\n",
      "train batch click num: 381  buy num: 40\n",
      "10 10 10\n",
      "2018-12-09T21:00:57.627206: Epoch 0 Batch 90/106  train_loss=0.100 train_ctr_loss=0.081 train_cvr_loss=0.006 train_ctcvr_loss=0.013 train_ctr_auc=0.566 train_cvr_auc=0.676 train_ctcvr_auc=0.651\n",
      "train batch click num: 340  buy num: 43\n",
      "train batch click num: 319  buy num: 29\n",
      "train batch click num: 341  buy num: 42\n",
      "train batch click num: 365  buy num: 31\n",
      "train batch click num: 332  buy num: 42\n",
      "train batch click num: 352  buy num: 49\n",
      "train batch click num: 352  buy num: 41\n",
      "train batch click num: 367  buy num: 48\n",
      "train batch click num: 338  buy num: 42\n",
      "train batch click num: 366  buy num: 48\n",
      "10 10 10\n",
      "2018-12-09T21:15:17.235743: Epoch 0 Batch 100/106  train_loss=0.099 train_ctr_loss=0.078 train_cvr_loss=0.006 train_ctcvr_loss=0.015 train_ctr_auc=0.566 train_cvr_auc=0.689 train_ctcvr_auc=0.661\n",
      "train batch click num: 352  buy num: 44\n",
      "train batch click num: 368  buy num: 42\n",
      "train batch click num: 349  buy num: 33\n",
      "train batch click num: 367  buy num: 49\n",
      "train batch click num: 379  buy num: 38\n",
      "test batch click num: 564  buy num: 125\n",
      "test batch click num: 627  buy num: 141\n",
      "test batch click num: 603  buy num: 150\n",
      "test batch click num: 588  buy num: 121\n",
      "test batch click num: 609  buy num: 120\n",
      "test batch click num: 543  buy num: 118\n",
      "test batch click num: 581  buy num: 121\n",
      "test batch click num: 570  buy num: 121\n",
      "test batch click num: 598  buy num: 118\n",
      "test batch click num: 539  buy num: 113\n",
      "test batch click num: 605  buy num: 133\n",
      "11 11 11\n",
      "2018-12-09T21:38:17.006862: Epoch 0 Batch 10/108  test_loss = 0.174 test_ctr_loss = 0.120 test_cvr_loss = 0.016 test_ctcvr_loss = 0.038  test_ctr_auc = 0.538 test_cvr_auc = 0.679 test_ctcvr_auc = 0.632\n",
      "test batch click num: 537  buy num: 125\n",
      "test batch click num: 535  buy num: 130\n",
      "test batch click num: 596  buy num: 132\n",
      "test batch click num: 599  buy num: 140\n",
      "test batch click num: 593  buy num: 128\n",
      "test batch click num: 574  buy num: 125\n",
      "test batch click num: 637  buy num: 137\n",
      "test batch click num: 553  buy num: 134\n",
      "test batch click num: 574  buy num: 114\n",
      "test batch click num: 605  buy num: 145\n",
      "10 10 10\n",
      "2018-12-09T21:51:37.076386: Epoch 0 Batch 20/108  test_loss = 0.177 test_ctr_loss = 0.120 test_cvr_loss = 0.017 test_ctcvr_loss = 0.041  test_ctr_auc = 0.541 test_cvr_auc = 0.668 test_ctcvr_auc = 0.618\n",
      "test batch click num: 557  buy num: 121\n",
      "test batch click num: 609  buy num: 133\n",
      "test batch click num: 615  buy num: 124\n",
      "test batch click num: 573  buy num: 133\n",
      "test batch click num: 552  buy num: 115\n",
      "test batch click num: 593  buy num: 143\n",
      "test batch click num: 602  buy num: 136\n",
      "test batch click num: 592  buy num: 157\n",
      "test batch click num: 566  buy num: 133\n",
      "test batch click num: 606  buy num: 140\n",
      "10 10 10\n",
      "2018-12-09T22:05:26.129167: Epoch 0 Batch 30/108  test_loss = 0.177 test_ctr_loss = 0.120 test_cvr_loss = 0.017 test_ctcvr_loss = 0.041  test_ctr_auc = 0.539 test_cvr_auc = 0.670 test_ctcvr_auc = 0.628\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test batch click num: 597  buy num: 135\n",
      "test batch click num: 599  buy num: 142\n",
      "test batch click num: 554  buy num: 110\n",
      "test batch click num: 568  buy num: 156\n",
      "test batch click num: 571  buy num: 123\n",
      "test batch click num: 584  buy num: 125\n",
      "test batch click num: 569  buy num: 137\n",
      "test batch click num: 593  buy num: 158\n",
      "test batch click num: 616  buy num: 137\n",
      "test batch click num: 571  buy num: 132\n",
      "10 10 10\n",
      "2018-12-09T22:18:54.466305: Epoch 0 Batch 40/108  test_loss = 0.167 test_ctr_loss = 0.113 test_cvr_loss = 0.015 test_ctcvr_loss = 0.038  test_ctr_auc = 0.546 test_cvr_auc = 0.680 test_ctcvr_auc = 0.633\n",
      "test batch click num: 447  buy num: 62\n",
      "test batch click num: 436  buy num: 58\n",
      "test batch click num: 420  buy num: 76\n",
      "test batch click num: 456  buy num: 68\n",
      "test batch click num: 402  buy num: 59\n",
      "test batch click num: 479  buy num: 78\n",
      "test batch click num: 386  buy num: 52\n",
      "test batch click num: 415  buy num: 68\n",
      "test batch click num: 460  buy num: 71\n",
      "test batch click num: 426  buy num: 79\n",
      "10 10 10\n",
      "2018-12-09T22:32:24.878484: Epoch 0 Batch 50/108  test_loss = 0.122 test_ctr_loss = 0.089 test_cvr_loss = 0.010 test_ctcvr_loss = 0.024  test_ctr_auc = 0.556 test_cvr_auc = 0.698 test_ctcvr_auc = 0.651\n",
      "test batch click num: 475  buy num: 78\n",
      "test batch click num: 449  buy num: 55\n",
      "test batch click num: 444  buy num: 87\n",
      "test batch click num: 451  buy num: 77\n",
      "test batch click num: 445  buy num: 89\n",
      "test batch click num: 441  buy num: 69\n",
      "test batch click num: 434  buy num: 64\n",
      "test batch click num: 446  buy num: 80\n",
      "test batch click num: 432  buy num: 70\n",
      "test batch click num: 431  buy num: 76\n",
      "10 10 10\n",
      "2018-12-09T22:44:47.854137: Epoch 0 Batch 60/108  test_loss = 0.124 test_ctr_loss = 0.090 test_cvr_loss = 0.010 test_ctcvr_loss = 0.023  test_ctr_auc = 0.548 test_cvr_auc = 0.674 test_ctcvr_auc = 0.624\n",
      "test batch click num: 415  buy num: 71\n",
      "test batch click num: 433  buy num: 73\n",
      "test batch click num: 409  buy num: 62\n",
      "test batch click num: 416  buy num: 72\n",
      "test batch click num: 453  buy num: 84\n",
      "test batch click num: 428  buy num: 91\n",
      "test batch click num: 447  buy num: 86\n",
      "test batch click num: 442  buy num: 91\n",
      "test batch click num: 389  buy num: 59\n",
      "test batch click num: 384  buy num: 48\n",
      "10 10 10\n",
      "2018-12-09T22:57:17.699644: Epoch 0 Batch 70/108  test_loss = 0.103 test_ctr_loss = 0.081 test_cvr_loss = 0.007 test_ctcvr_loss = 0.015  test_ctr_auc = 0.552 test_cvr_auc = 0.658 test_ctcvr_auc = 0.605\n",
      "test batch click num: 411  buy num: 57\n",
      "test batch click num: 399  buy num: 61\n",
      "test batch click num: 382  buy num: 51\n",
      "test batch click num: 387  buy num: 55\n",
      "test batch click num: 342  buy num: 46\n",
      "test batch click num: 360  buy num: 61\n",
      "test batch click num: 400  buy num: 53\n",
      "test batch click num: 376  buy num: 54\n",
      "test batch click num: 368  buy num: 46\n",
      "test batch click num: 368  buy num: 54\n",
      "10 10 10\n",
      "2018-12-09T23:10:12.713745: Epoch 0 Batch 80/108  test_loss = 0.103 test_ctr_loss = 0.079 test_cvr_loss = 0.007 test_ctcvr_loss = 0.017  test_ctr_auc = 0.556 test_cvr_auc = 0.673 test_ctcvr_auc = 0.609\n",
      "test batch click num: 403  buy num: 62\n",
      "test batch click num: 430  buy num: 64\n",
      "test batch click num: 428  buy num: 69\n",
      "test batch click num: 382  buy num: 56\n",
      "test batch click num: 381  buy num: 52\n",
      "test batch click num: 409  buy num: 62\n",
      "test batch click num: 411  buy num: 54\n",
      "test batch click num: 376  buy num: 74\n",
      "test batch click num: 404  buy num: 70\n",
      "test batch click num: 419  buy num: 59\n",
      "10 10 10\n",
      "2018-12-09T23:22:40.148678: Epoch 0 Batch 90/108  test_loss = 0.113 test_ctr_loss = 0.087 test_cvr_loss = 0.008 test_ctcvr_loss = 0.018  test_ctr_auc = 0.550 test_cvr_auc = 0.688 test_ctcvr_auc = 0.639\n",
      "test batch click num: 358  buy num: 48\n",
      "test batch click num: 353  buy num: 52\n",
      "test batch click num: 355  buy num: 41\n",
      "test batch click num: 350  buy num: 31\n",
      "test batch click num: 351  buy num: 40\n",
      "test batch click num: 332  buy num: 47\n",
      "test batch click num: 307  buy num: 33\n",
      "test batch click num: 321  buy num: 35\n",
      "test batch click num: 369  buy num: 44\n",
      "test batch click num: 352  buy num: 33\n",
      "10 10 10\n",
      "2018-12-09T23:35:17.849947: Epoch 0 Batch 100/108  test_loss = 0.093 test_ctr_loss = 0.076 test_cvr_loss = 0.006 test_ctcvr_loss = 0.011  test_ctr_auc = 0.553 test_cvr_auc = 0.698 test_ctcvr_auc = 0.638\n",
      "test batch click num: 353  buy num: 41\n",
      "test batch click num: 380  buy num: 45\n",
      "test batch click num: 378  buy num: 37\n",
      "test batch click num: 380  buy num: 35\n",
      "test batch click num: 354  buy num: 47\n",
      "test batch click num: 346  buy num: 50\n",
      "test batch click num: 361  buy num: 51\n",
      "Model Trained and Saved\n"
     ]
    }
   ],
   "source": [
    "%matplotlib inline\n",
    "%config InlineBackend.figure_format = 'retina'\n",
    "import matplotlib.pyplot as plt\n",
    "import time\n",
    "import datetime\n",
    "\n",
    "losses = {'train':[], 'test':[]}\n",
    "\n",
    "ctr_auc_stat = {'train':[], 'test':[]}\n",
    "cvr_auc_stat = {'train':[], 'test':[]}\n",
    "ctcvr_auc_stat = {'train':[], 'test':[]}\n",
    "\n",
    "\n",
    "with tf.Session(graph=train_graph) as sess:\n",
    "    \n",
    "    # Collect data for TensorBoard\n",
    "    # Keep track of gradient values and sparsity\n",
    "    grad_summaries = []\n",
    "    for g, v in gradients:\n",
    "        if g is not None:\n",
    "            grad_hist_summary = tf.summary.histogram(\"{}/grad/hist\".format(v.name.replace(':', '_')), g)\n",
    "            sparsity_summary = tf.summary.scalar(\"{}/grad/sparsity\".format(v.name.replace(':', '_')), tf.nn.zero_fraction(g))\n",
    "            grad_summaries.append(grad_hist_summary)\n",
    "            grad_summaries.append(sparsity_summary)\n",
    "    grad_summaries_merged = tf.summary.merge(grad_summaries)\n",
    "        \n",
    "    # Output directory for models and summaries\n",
    "    timestamp = str(int(time.time()))\n",
    "    out_dir = os.path.abspath(os.path.join(os.path.curdir, \"runs\", timestamp))\n",
    "    print(\"Writing to {}\\n\".format(out_dir))\n",
    "     \n",
    "    # Summaries for loss and accuracy\n",
    "    loss_summary = tf.summary.scalar(\"loss\", loss)\n",
    "\n",
    "    # Train Summaries\n",
    "    train_summary_op = tf.summary.merge([loss_summary, grad_summaries_merged])\n",
    "    train_summary_dir = os.path.join(out_dir, \"summaries\", \"train\")\n",
    "    train_summary_writer = tf.summary.FileWriter(train_summary_dir, sess.graph)\n",
    "\n",
    "    # Inference summaries\n",
    "    inference_summary_op = tf.summary.merge([loss_summary])\n",
    "    inference_summary_dir = os.path.join(out_dir, \"summaries\", \"inference\")\n",
    "    inference_summary_writer = tf.summary.FileWriter(inference_summary_dir, sess.graph)\n",
    "\n",
    "    sess.run(tf.global_variables_initializer())\n",
    "    saver = tf.train.Saver()\n",
    "    \n",
    "    # Alternative: randomly split a single day's data into train/test with a fixed random seed\n",
    "    #train_X,test_X, train_y, test_y = train_test_split(features,  \n",
    "    #                                                    targets_values,  \n",
    "    #                                                   test_size = 0.2,  \n",
    "    #                                                   random_state = 0)  \n",
    "    \n",
    "    # Use two days of data: train on the first day, test on the second\n",
    "    train_X, train_y = train_features, train_targets_values \n",
    "    test_X, test_y = test_features, test_targets_values \n",
    "        \n",
    "\n",
    "    for epoch_i in range(num_epochs):\n",
    "            \n",
    "        train_ctr_auc_arr = []\n",
    "        train_cvr_auc_arr = []\n",
    "        train_ctcvr_auc_arr = []\n",
    "        \n",
    "        test_ctr_auc_arr = []\n",
    "        test_cvr_auc_arr = []\n",
    "        test_ctcvr_auc_arr = []\n",
    "        \n",
    "        train_batches = get_batches(train_X, train_y, batch_size)\n",
    "        test_batches = get_batches(test_X, test_y, test_batch_size)\n",
    "    \n",
    "        \n",
    "        # Training iterations; record the training loss\n",
    "        for batch_i in range(len(train_X) // batch_size):\n",
    "            x, y = next(train_batches)\n",
    "\n",
    "            item_id = np.zeros([batch_size, 1])\n",
    "            for i in range(batch_size):\n",
    "                item_id[i] = x.take(1,1)[i]\n",
    "                \n",
    "            #User_CateIDs, User_BrandIDs\n",
    "            user_cateids = np.zeros([batch_size, 100])\n",
    "            for i in range(batch_size):\n",
    "                user_cateids[i] = x.take(10,1)[i]\n",
    "            user_brandids = np.zeros([batch_size, 100])\n",
    "            for i in range(batch_size):\n",
    "                user_brandids[i] = x.take(11,1)[i]\n",
    "            \n",
    "            feed = {\n",
    "                UserID : np.reshape(x.take(0,1), [batch_size, 1]),\n",
    "                ItemID: item_id,\n",
    "                User_Cluster : np.reshape(x.take(2,1), [batch_size, 1]),\n",
    "                CategoryID : np.reshape(x.take(3,1), [batch_size, 1]),\n",
    "                ShopID : np.reshape(x.take(4,1), [batch_size, 1]),\n",
    "                BrandID : np.reshape(x.take(5,1), [batch_size, 1]),\n",
    "                Com_CateID : np.reshape(x.take(6,1), [batch_size, 1]),\n",
    "                Com_ShopID : np.reshape(x.take(7,1), [batch_size, 1]),\n",
    "                Com_BrandID : np.reshape(x.take(8,1), [batch_size, 1]),\n",
    "                PID : np.reshape(x.take(9,1), [batch_size, 1]),\n",
    "                User_CateIDs: user_cateids,\n",
    "                User_BrandIDs: user_brandids,\n",
    "                targets: y,\n",
    "                #np.reshape(y, [batch_size, 2]),\n",
    "                lr: learning_rate}\n",
    "\n",
    "            step, train_loss, train_ctr_loss, train_cvr_loss, train_ctcvr_loss, \\\n",
    "                train_ctr_prob, train_cvr_prob, train_ctcvr_prob, \\\n",
    "                train_ctr_label, train_cvr_label, train_ctcvr_label, train_ctr_click,\\\n",
    "                summaries, _ = \\\n",
    "                sess.run([global_step, loss, ctr_loss, cvr_loss, ctcvr_loss, \\\n",
    "                                    ctr_prob, cvr_prob, ctcvr_prob,\n",
    "                                    ctr_label, ctcvr_label, ctcvr_label, ctr_clk, \\\n",
    "                                    train_summary_op, train_op], feed)\n",
    "            # note: ctcvr_label ([1-buy, buy]) is fetched twice above on purpose; it also\n",
    "            # serves as the two-column CVR label, because the CVR AUC below is computed\n",
    "            # only on the clicked subset, where the CVR and CTCVR labels coincide\n",
    "            losses['train'].append(train_loss)\n",
    "            train_summary_writer.add_summary(summaries, step)\n",
    "            \n",
    "            \n",
    "            print(\"train batch click num:\", len(np.nonzero(y[:,0:1])[0]), \n",
    "                    \" buy num:\", len(np.nonzero(y[:,1:2])[0]))\n",
    "            \n",
    "            ctr_input_arr = np.concatenate((train_ctr_label, train_ctr_prob[:, 1:2]), axis=1)\n",
    "            train_ctr_auc = calc_auc(ctr_input_arr)\n",
    "            if train_ctr_auc > 0:\n",
    "                train_ctr_auc_arr.append(train_ctr_auc)\n",
    "\n",
    "            cvr_input_arr = np.concatenate((train_cvr_label, train_cvr_prob[:, 1:2]), axis=1)\n",
    "            train_cvr_auc = calc_auc_with_filter(cvr_input_arr, train_ctr_click)\n",
    "            if train_cvr_auc > 0:\n",
    "                train_cvr_auc_arr.append(train_cvr_auc)\n",
    "\n",
    "            ctcvr_input_arr = np.concatenate((train_ctcvr_label, train_ctcvr_prob[:, 1:2]), axis=1)\n",
    "            train_ctcvr_auc = calc_auc(ctcvr_input_arr)\n",
    "            if train_ctcvr_auc > 0:\n",
    "                train_ctcvr_auc_arr.append(train_ctcvr_auc)\n",
    "            \n",
    "            \n",
    "            # Show every <show_every_n_batches> batches\n",
    "            if batch_i > 0 and (epoch_i * (len(train_X) // batch_size) + batch_i) % show_every_n_batches == 0:\n",
    "                # Average the train AUC accumulated over the last show_every_n_batches batches\n",
    "                print (len(train_ctr_auc_arr),len(train_cvr_auc_arr) , len(train_ctcvr_auc_arr))\n",
    "                train_ctr_auc = train_ctr_auc if len(train_ctr_auc_arr) == 0  else sum(train_ctr_auc_arr) / float(len(train_ctr_auc_arr))\n",
    "                train_cvr_auc = train_cvr_auc if len(train_cvr_auc_arr) == 0  else sum(train_cvr_auc_arr) / float(len(train_cvr_auc_arr))\n",
    "                train_ctcvr_auc = train_ctcvr_auc if len(train_ctcvr_auc_arr) == 0  else sum(train_ctcvr_auc_arr) / float(len(train_ctcvr_auc_arr))\n",
    "                # Record the AUC\n",
    "                ctr_auc_stat['train'].append(train_ctr_auc)\n",
    "                cvr_auc_stat['train'].append(train_cvr_auc)\n",
    "                ctcvr_auc_stat['train'].append(train_ctcvr_auc)\n",
    "                # Clear the accumulators and continue\n",
    "                train_ctr_auc_arr.clear()\n",
    "                train_cvr_auc_arr.clear()\n",
    "                train_ctcvr_auc_arr.clear()\n",
    "                \n",
    "                time_str = datetime.datetime.now().isoformat()\n",
    "                print('{}: Epoch {} Batch {}/{}  train_loss={:.3f} train_ctr_loss={:.3f} train_cvr_loss={:.3f} train_ctcvr_loss={:.3f} train_ctr_auc={:.3f} train_cvr_auc={:.3f} train_ctcvr_auc={:.3f}'.format(\n",
    "                    time_str,\n",
    "                    epoch_i, \n",
    "                    batch_i,\n",
    "                    (len(train_X) // batch_size),\n",
    "                    train_loss,\n",
    "                    train_ctr_loss,\n",
    "                    train_cvr_loss,\n",
    "                    train_ctcvr_loss,\n",
    "                    train_ctr_auc,\n",
    "                    train_cvr_auc,\n",
    "                    train_ctcvr_auc))\n",
    "                \n",
    "        # Iterate over the test data\n",
    "        for batch_i  in range(len(test_X) // test_batch_size):\n",
    "            x, y = next(test_batches)\n",
    "            \n",
    "            #user_id = np.zeros([test_batch_size, 1])\n",
    "            item_id = np.zeros([test_batch_size, 1])\n",
    "            for i in range(test_batch_size):\n",
    "                #user_id[i] = x.take(0,1)[i]\n",
    "                item_id[i] = x.take(1,1)[i]\n",
    "            #User_CateIDs, User_BrandIDs\n",
    "            user_cateids = np.zeros([test_batch_size, 100])\n",
    "            for i in range(test_batch_size):\n",
    "                user_cateids[i] = x.take(10,1)[i]\n",
    "            user_brandids = np.zeros([test_batch_size, 100])\n",
    "            for i in range(test_batch_size):\n",
    "                user_brandids[i] = x.take(11,1)[i]\n",
    "            feed = {\n",
    "                UserID : np.reshape(x.take(0,1), [test_batch_size, 1]),\n",
    "                ItemID: item_id,\n",
    "                User_Cluster : np.reshape(x.take(2,1), [test_batch_size, 1]),\n",
    "                CategoryID : np.reshape(x.take(3,1), [test_batch_size, 1]),\n",
    "                ShopID : np.reshape(x.take(4,1), [test_batch_size, 1]),\n",
    "                BrandID : np.reshape(x.take(5,1), [test_batch_size, 1]),\n",
    "                Com_CateID : np.reshape(x.take(6,1), [test_batch_size, 1]),\n",
    "                Com_ShopID : np.reshape(x.take(7,1), [test_batch_size, 1]),\n",
    "                Com_BrandID : np.reshape(x.take(8,1), [test_batch_size, 1]),\n",
    "                PID : np.reshape(x.take(9,1), [test_batch_size, 1]),\n",
    "                User_CateIDs: user_cateids,\n",
    "                User_BrandIDs: user_brandids,\n",
    "                targets: np.reshape(y, [test_batch_size, 2]),\n",
    "                lr: learning_rate}\n",
    "            \n",
    "            step, test_loss, test_ctr_loss, test_cvr_loss, test_ctcvr_loss, \\\n",
    "                test_ctr_prob, test_cvr_prob, test_ctcvr_prob, \\\n",
    "                test_ctr_label, test_cvr_label, test_ctcvr_label, test_ctr_click,\\\n",
    "                 summaries = sess.run([global_step, loss, ctr_loss, cvr_loss, ctcvr_loss, \\\n",
    "                                    ctr_prob, cvr_prob, ctcvr_prob,\n",
    "                                    ctr_label, ctcvr_label, ctcvr_label, ctr_clk, \\\n",
    "                                       inference_summary_op], feed)\n",
    "            # as in training, ctcvr_label doubles as the two-column CVR label\n",
    "\n",
    "            #保存测试损失\n",
    "            losses['test'].append(test_loss)\n",
    "            inference_summary_writer.add_summary(summaries, step)  #\n",
    "            print(\"test batch click num:\", len(np.nonzero(y[:,0:1])[0]), \n",
    "                    \" buy num:\", len(np.nonzero(y[:,1:2])[0]))\n",
    "            \n",
    "            ctr_input_arr = np.concatenate((test_ctr_label, test_ctr_prob[:, 1:2]), axis=1)\n",
    "            test_ctr_auc = calc_auc(ctr_input_arr)\n",
    "            if test_ctr_auc > 0:\n",
    "                test_ctr_auc_arr.append(test_ctr_auc)\n",
    "\n",
    "            cvr_input_arr = np.concatenate((test_cvr_label, test_cvr_prob[:, 1:2]), axis=1)\n",
    "            test_cvr_auc = calc_auc_with_filter(cvr_input_arr, test_ctr_click)\n",
    "            if test_cvr_auc > 0:\n",
    "                test_cvr_auc_arr.append(test_cvr_auc)\n",
    " \n",
    "            ctcvr_input_arr = np.concatenate((test_ctcvr_label, test_ctcvr_prob[:, 1:2]), axis=1)\n",
    "            test_ctcvr_auc = calc_auc(ctcvr_input_arr)\n",
    "            if test_ctcvr_auc > 0:\n",
    "                test_ctcvr_auc_arr.append(test_ctcvr_auc)\n",
    "            \n",
    "            time_str = datetime.datetime.now().isoformat()\n",
    "            if batch_i > 0 and (epoch_i * (len(test_X) // test_batch_size) + batch_i) % show_test_every_n_batches == 0:\n",
    "                \n",
    "                # 累积 show_every_n_batches 个batch的Train AUC\n",
    "                print (len(test_ctr_auc_arr),len(test_cvr_auc_arr) , len(test_ctcvr_auc_arr))\n",
    "                test_ctr_auc = test_ctr_auc if len(test_ctr_auc_arr) == 0  else sum(test_ctr_auc_arr) / float(len(test_ctr_auc_arr))\n",
    "                test_cvr_auc = test_cvr_auc if len(test_cvr_auc_arr) == 0  else sum(test_cvr_auc_arr) / float(len(test_cvr_auc_arr))\n",
    "                test_ctcvr_auc = test_ctcvr_auc if len(test_ctcvr_auc_arr) == 0  else sum(test_ctcvr_auc_arr) / float(len(test_ctcvr_auc_arr))\n",
    "                # 保存 AUC\n",
    "                ctr_auc_stat['test'].append(test_ctr_auc)\n",
    "                cvr_auc_stat['test'].append(test_cvr_auc)\n",
    "                ctcvr_auc_stat['test'].append(test_ctcvr_auc)\n",
    "                # 清空，并继续累积\n",
    "                test_ctr_auc_arr.clear()\n",
    "                test_cvr_auc_arr.clear()\n",
    "                test_ctcvr_auc_arr.clear()\n",
    "                \n",
    "                print('{}: Epoch {} Batch {}/{}  test_loss = {:.3f} test_ctr_loss = {:.3f} test_cvr_loss = {:.3f} test_ctcvr_loss = {:.3f}  test_ctr_auc = {:.3f} test_cvr_auc = {:.3f} test_ctcvr_auc = {:.3f}'.format(\n",
    "                    time_str,\n",
    "                    epoch_i,\n",
    "                    batch_i,\n",
    "                    (len(test_X) // test_batch_size),\n",
    "                    test_loss,\n",
    "                    test_ctr_loss,\n",
    "                    test_cvr_loss,\n",
    "                    test_ctcvr_loss,\n",
    "                    test_ctr_auc,\n",
    "                    test_cvr_auc,\n",
    "                    test_ctcvr_auc))\n",
    "\n",
    "    # Save Model\n",
    "    saver.save(sess, save_dir)  #, global_step=epoch_i\n",
    "    print('Model Trained and Saved')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 在 TensorBoard 中查看可视化结果"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "tensorboard --logdir /PATH_TO_CODE/runs/1543772895/summaries/\n",
    "\n",
    "<img src=\"assets/esmm_tf_loss.png\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 辅助函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "import os\n",
    "import pickle\n",
    "\n",
    "def save_params(params):\n",
    "    \"\"\"\n",
    "    Save parameters to file\n",
    "    \"\"\"\n",
    "    pickle.dump(params, open('./save/params.p', 'wb'))\n",
    "\n",
    "\n",
    "def load_params():\n",
    "    \"\"\"\n",
    "    Load parameters from file\n",
    "    \"\"\"\n",
    "    return pickle.load(open('./save/params.p', mode='rb'))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 保存参数\n",
    "保存`save_dir` 在生成预测时使用。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "save_params((save_dir))\n",
    "\n",
    "load_dir = load_params()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示训练Loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.plot(losses['train'], label='Training loss')\n",
    "plt.legend()\n",
    "_ = plt.ylim()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示测试Loss\n",
    "迭代次数再增加一些，后面出现严重过拟合的情况"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true,
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "plt.plot(losses['test'], label='Test loss')\n",
    "plt.legend()\n",
    "_ = plt.ylim()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示训练CTR AUC"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.plot(ctr_auc_stat['train'], label='Training AUC')\n",
    "plt.legend()\n",
    "_ = plt.ylim()\n",
    "print(ctr_auc_stat['train'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示测试 CTR AUC"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.plot(ctr_auc_stat['test'], label='Test AUC')\n",
    "plt.legend()\n",
    "_ = plt.ylim()\n",
    "print(ctr_auc_stat['test'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示训练CVR AUC"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.plot(cvr_auc_stat['train'], label='Training AUC')\n",
    "plt.legend()\n",
    "_ = plt.ylim()\n",
    "print(cvr_auc_stat['train'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示测试 CVR AUC"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.plot(cvr_auc_stat['test'], label='Test AUC')\n",
    "plt.legend()\n",
    "_ = plt.ylim()\n",
    "print(cvr_auc_stat['test'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示训练CTCVR AUC"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.plot(ctcvr_auc_stat['train'], label='Training AUC')\n",
    "plt.legend()\n",
    "_ = plt.ylim()\n",
    "print(ctcvr_auc_stat['train'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 显示测试 CTCVR AUC"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "plt.plot(ctcvr_auc_stat['test'], label='Test AUC')\n",
    "plt.legend()\n",
    "_ = plt.ylim()\n",
    "print(ctcvr_auc_stat['test'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 总结\n",
    "\n",
    "ESMM模型利用用户行为序列数据在完整样本空间建模，避免了传统CVR模型经常遭遇的样本选择偏差和训练数据稀疏的问题，取得了显著的效果。另一方面，ESMM模型首次提出了利用学习CTR和CTCVR的辅助任务迂回学习CVR的思路。ESMM模型中的BASE子网络可以替换为任意的学习模型，因此ESMM的框架可以非常容易地和其他学习模型集成，从而吸收其他学习模型的优势，进一步提升学习效果，想象空间巨大。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 引用\n",
    "[1] Xiao Ma, Liqin Zhao, Guan Huang, Zhi Wang, Zelin Hu, Xiaoqiang Zhu, and Kun Gai. 2018. Entire Space Multi-Task Model: An Effective Approach for Estimating Post-Click Conversion Rate. SIGIR (2018)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "今天的分享就到这里，请多指教！"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
