{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1 简介\n",
    "\n",
    "本文参考[链接1](https://github.com/weishao6hao/IJCAI-18)和[链接2](https://tianchi.aliyun.com/forum/postDetail?spm=5176.12586969.1002.24.6d0a48c5EZo7iI&postId=4601)，使用python对[IJCAI-18 阿里妈妈搜索广告转化预测](https://tianchi.aliyun.com/competition/introduction.htm?spm=5176.100066.0.0.7bacd780V3AzWb&raceId=231647)大赛数据进行了探索与分析，以可视化的方式做了一点微小的工作，供大家参考，文中有错误的内容望读者及时指正。\n",
    "\n",
    "搜索广告的转化率，作为衡量广告转化效果的指标，从广告创意、商品品质、商店质量等多个角度综合刻画用户对广告商品的购买意向，即广告商品被用户点击后产生购买行为的概率。本次比赛依托电商CTR数据为基础，旨在通过广告商品信息、用户信息、上下文信息和店铺信息等4类数据，对转化率进行预估以辅助商家决策。\n",
    "\n",
    "本次比赛为参赛选手提供了5类数据（基础数据、广告商品信息、用户信息、上下文信息和店铺信息）如下。基础数据表提供了搜索广告最基本的信息，以及“是否交易”的标记。广告商品信息、用户信息、上下文信息和店铺信息等4类数据，提供了对转化率预估可能有帮助的辅助信息。\n",
    "\n",
    "* 基础数据：各类数据的编号\n",
    "* 广告商品信息：商品的具体信息\n",
    "* 用户信息：用户基本个人信息\n",
    "* 上下文信息：广告展示页面的基本信息\n",
    "\n",
    "用于初赛的数据包含了若干天的样本。最后一天的数据用于结果评测，对选手不公布；其余日期的数据作为训练数据，提供给参赛选手；。\n",
    "\n",
    "在上述各张数据表中，绝大部分样本包含了完整的字段数据，也有少部分样本缺乏特定字段的数据。如果一条样本的某个字段为“-1”，表示这个样本的对应字段缺乏数据。"
   ]
  },
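  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The \"-1\" missing-value convention can be checked mechanically. The following is a minimal sketch on a toy frame (the column names are only illustrative; the real ones appear in the tables below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Toy frame following the convention: -1 marks a missing field\n",
    "toy = pd.DataFrame({'user_age_level': [1003, -1, 1005],\n",
    "                    'item_price_level': [4, 7, -1]})\n",
    "# Count the missing markers per column\n",
    "print((toy == -1).sum())"
   ]
  },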
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2 数据探索"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.0 准备阶段\n",
    "\n",
    "#### 2.0.1 导入必要的包和库"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 提供了对多维数组对象的支持，支持高级大量的维度数组与矩阵运算，也针对数组运算提供大量的数学函数库。\n",
    "import numpy as np\n",
    "# Pandas是一个强大的分析结构化数据的工具集；用于数据挖掘和数据分析，同时也提供数据清洗功能。\n",
    "import pandas as pd\n",
    "# os模块提供了多数操作系统的功能接口函数\n",
    "import os \n",
    "# 处理日期和时间\n",
    "import arrow as ar\n",
    "# 绘图\n",
    "import matplotlib.pyplot as plt\n",
    "# Seaborn是在matplotlib的基础上进行了更高级的API封装,从而使得作图更加容易,\n",
    "import seaborn as sns\n",
    "# 一个优化matplotlib函数操作的package, Matplotlib 中文支持组件\n",
    "from pyplotz.pyplotz import PyplotZ \n",
    "pltz=PyplotZ()\n",
    "# 三种配色的调色板\n",
    "from palettable.colorbrewer.sequential import Blues_9,BuGn_9,Greys_3,PuRd_5\n",
    "# re模块是python独有的匹配字符串的模块,该模块中提供的很多功能是基于正则表达式实现的\n",
    "import re\n",
    "# 格式化日期和时间\n",
    "import time\n",
    "# 进度条库\n",
    "from tqdm import tqdm\n",
    "# 序列化库\n",
    "import pickle\n",
    "# 预处理库\n",
    "from sklearn import preprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.0.2 基本设置"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# matplotlib支持中文\n",
    "# plt.rcParams['font.sans-serif'] = ['Songti SC']  # 用来正常显示中文标签，字体根据自己电脑情况更改，如Windows可用SimHei\n",
    "plt.rcParams['axes.unicode_minus'] = False  # 用来正常显示负号\n",
    "# 通过警告过滤器进行控制不发出警告消息\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')\n",
    "# matplotlib中设置样式表\n",
    "plt.style.use('fivethirtyeight')\n",
    "# 直接在python console里面生成图像\n",
    "%matplotlib inline\n",
    "# 目录\n",
    "os.chdir('.')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.0.3 读取数据集"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# round1\n",
    "train_path_people_1='../../datasets/cut/round1/round1_train_cut_by_people.txt'\n",
    "train_path_type_1='../../datasets/cut/round1/round1_train_cut_by_type.txt'\n",
    "test_path_a_1='../../datasets/cut/round1/round1_ijcai_18_test_a_20180301.txt'\n",
    "test_path_b_1='../../datasets/cut/round1/round1_ijcai_18_test_b_20180418.txt'\n",
    "# round2\n",
    "train_path_type_2='../../datasets/cut/round2/round2_train_cut_by_type.txt'\n",
    "test_path_a_2='../../datasets/cut/round2/round2_test_a.txt'\n",
    "test_path_b_2='../../datasets/cut/round2/round2_test_b.txt'\n",
    "train=pd.read_table(train_path_type_1,delimiter=' ')\n",
    "test=pd.read_table(test_path_a_1,delimiter=' ')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.0.4 分析数据集文件"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "print('Training set: {} rows; test set: {} rows'.format(train.shape[0],test.shape[0]))\n",
    "print('Training set: {} columns; test set: {} columns'.format(train.shape[1],test.shape[1]))\n",
    "print('The training set has {} duplicated instance_id values'.format(train.shape[0]-len(train.instance_id.unique())))\n",
    "# np.intersect1d returns the elements common to both arrays\n",
    "print('Train and test share {} instance_id values'.format(len(np.intersect1d(train.instance_id.values,test.instance_id.values))))\n",
    "# Does the training set contain missing values?\n",
    "print('No missing values in the training set') if True not in train.isnull().any().values else print('The training set contains missing values')\n",
    "# Does the test set contain missing values?\n",
    "print('No missing values in the test set') if True not in test.isnull().any().values else print('The test set contains missing values')\n",
    "print(train.head())\n",
    "print(test.head())"
   ]
  },
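  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The duplicate and overlap checks above boil down to counting distinct values and intersecting ID arrays; a self-contained toy example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# Toy ID columns standing in for train.instance_id and test.instance_id\n",
    "train_ids = pd.Series([1, 2, 2, 3])\n",
    "test_ids = pd.Series([3, 4])\n",
    "# Number of duplicated IDs: total rows minus distinct values\n",
    "dup = len(train_ids) - train_ids.nunique()\n",
    "# IDs present in both sets\n",
    "both = np.intersect1d(train_ids.values, test_ids.values)\n",
    "print(dup, both)"
   ]
  },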
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 基础数据\n",
    "|字段|解释|\n",
    "| :- | :-| \n",
    "|instance_id|样本编号，Long|\n",
    "|is_trade|是否交易的标记位，Int类型；取值是0或者1，其中1 表示这条样本最终产生交易，0 表示没有交易|\n",
    "|item_id|广告商品编号，Long类型|\n",
    "|user_id|用户的编号，Long类型|\n",
    "|context_id|上下文信息的编号，Long类型|\n",
    "|shop_id|店铺的编号，Long类型|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "print('The training set contains '+str(len(train))+' samples')\n",
    "\n",
    "print('The ratio of non-trades to trades is '+str(len(train[train.is_trade==0])/len(train[train.is_trade==1])))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "print('The data contains '+str(len(train['item_id'].unique()))+' distinct ad items and '+str(len(train['shop_id'].unique()))+' distinct shops')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 探查下出现频率最高的各类型id\n",
    "for x in ['instance_id','is_trade','item_id','user_id','context_id','shop_id']:\n",
    "    print(train[x].value_counts().head())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结\n",
    "* is_trade比例不均匀，大约为52\n",
    "* instance_id有少量重复脏数据\n",
    "* 有大量重复item、shop，商品符合电商长尾分布规律\n",
    "* 一共有3959家店铺，店铺shop_id6597981382309269962出现11278次\n",
    "* 一共有10075个商品，商品item_id7571023501622243456出现3001次\n",
    "* user、context有少量重复出现数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 使用饼图,看看样本正负比例\n",
    "f,ax=plt.subplots(1,2,figsize=(14,6))\n",
    "train['is_trade'].value_counts().plot.pie(explode=[0,0.2],autopct='%1.1f%%',ax=ax[0],shadow=True)\n",
    "ax[0].set_title('trade positive-negative ration')\n",
    "ax[0].set_ylabel('')\n",
    "sns.countplot('is_trade',data=train,ax=ax[1])\n",
    "ax[1].set_title('trade positive-negative appear times')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, axis1 = plt.subplots(1,1,figsize=(10,6))\n",
    "item_num=pd.DataFrame({'item_id_num':train['item_id'].value_counts().values})\n",
    "sns.countplot(x='item_id_num',data=item_num[item_num['item_id_num']<50])\n",
    "axis1.set_xlabel('item appear times')\n",
    "axis1.set_ylabel('the number of commodities that appear n times')\n",
    "axis1.set_title('commodities distribution')\n",
    "\n",
    "\n",
    "fig, axis1 = plt.subplots(1,1,figsize=(10,6))\n",
    "\n",
    "item_value=pd.DataFrame(train.item_id.value_counts()).reset_index().head(20)\n",
    "axis1.set_xlabel('item_id')\n",
    "axis1.set_ylabel('appear times')\n",
    "axis1.set_title('commodities that have top20 appear times')\n",
    "y_pos = np.arange(len(item_value))\n",
    "\n",
    "plt.bar(y_pos, item_value['item_id'], color=(0.2, 0.4, 0.6, 0.6))\n",
    "pltz.xticks(y_pos, item_value['item_id'])\n",
    "pltz.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, axis1 = plt.subplots(1, 1, figsize=(10, 6))\n",
    "shop_num = pd.DataFrame({'shop_id_num': train['shop_id'].value_counts().values})\n",
    "sns.countplot(x='shop_id_num', data=shop_num[shop_num['shop_id_num'] < 50])\n",
    "axis1.set_xlabel('shop appear times')\n",
    "axis1.set_ylabel('the number of shops that appear n times')\n",
    "axis1.set_title('shop distribution')\n",
    "\n",
    "fig, axis1 = plt.subplots(1, 1, figsize=(10, 6))\n",
    "\n",
    "shop_value = pd.DataFrame(train.shop_id.value_counts()).reset_index().head(20)\n",
    "axis1.set_xlabel('shop_id')\n",
    "axis1.set_ylabel('appear times')\n",
    "axis1.set_title('shops that have top20 appear times')\n",
    "y_pos = np.arange(len(shop_value))\n",
    "\n",
    "plt.bar(y_pos, shop_value['shop_id'], color=(0.2, 0.4, 0.6, 0.6))\n",
    "pltz.xticks(y_pos, shop_value['shop_id'])\n",
    "pltz.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 用户信息\n",
    "\n",
    "|字段|解释|\n",
    "| :- | :-| \n",
    "|user_gender_id|用户的预测性别编号，Int类型；0表示女性用户，1表示男性用户，2表示家庭用户|\n",
    "|user_age_level|用户的预测年龄等级，Int类型；数值越大表示年龄越大|\n",
    "|user_occupation_id|用户的预测职业编号，Int类型|\n",
    "|user_star_level|用户的星级编号，Int类型；数值越大表示用户的星级越高|"
   ]
  },
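  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the plots below, the \"average is_trade\" within a group is simply that group's empirical conversion rate; a toy groupby sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Toy samples: the mean of is_trade per group is the group's conversion rate\n",
    "toy = pd.DataFrame({'user_gender_id': [0, 0, 1, 1, 1],\n",
    "                    'is_trade':       [1, 0, 0, 0, 1]})\n",
    "rate = toy.groupby('user_gender_id')['is_trade'].mean()\n",
    "print(rate)"
   ]
  },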
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "for x in ['user_gender_id','user_age_level','user_occupation_id','user_star_level']:\n",
    "    print(train[x].value_counts().head(5))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "plt.figure()\n",
    "plt.plot(train.groupby('user_id').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('user_id')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)\n",
    "print('There are {} users in train and {} in test'.format(len(train.user_id.unique()),len(test.user_id.unique())))\n",
    "print('There are {} intersect user_id in train and test'.format(len(np.intersect1d(train.user_id.values,test.user_id.values))))\n",
    "print('There are {} woman, {} man, {} family and {} missing value in train'.format(train.loc[train['user_gender_id']==0,:].shape[0],train.loc[train['user_gender_id']==1,:].shape[0],\n",
    "                                                    train.loc[train['user_gender_id'] == 2, :].shape[0],train.loc[train['user_gender_id']==-1,:].shape[0]))\n",
    "print('There are {} woman, {} man, {} family and {} missing value in test'.format(test.loc[test['user_gender_id']==0,:].shape[0],test.loc[test['user_gender_id']==1,:].shape[0],\n",
    "                                                    test.loc[test['user_gender_id'] == 2,:].shape[0],test.loc[test['user_gender_id'] == -1,:].shape[0]))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('user_gender_id').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('user_gender_id')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)\n",
    "plt.figure()\n",
    "plt.hist(train.user_gender_id.values,bins=100)\n",
    "plt.xlabel('user_gender_id')\n",
    "plt.ylabel('number of user')\n",
    "plt.show()\n",
    "\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('user_age_level').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('user_age_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.xlim((1000,1010))\n",
    "plt.legend(loc=0)\n",
    "plt.figure()\n",
    "plt.hist(train.user_age_level.values,bins=3000)\n",
    "plt.xlabel('user_age_level')\n",
    "plt.ylabel('number of user')\n",
    "plt.xlim((1000,1010))\n",
    "print('There are {} missing values in user_age_level'.format(len(train.loc[train['user_age_level']==-1,:])))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('user_occupation_id').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('user_occupation_id')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.xlim((2000,2010))\n",
    "plt.legend(loc=0)\n",
    "plt.figure()\n",
    "plt.hist(train.user_occupation_id.values,bins=3000)\n",
    "plt.xlim((2000,2010))\n",
    "plt.xlabel('user_occupation_id')\n",
    "plt.ylabel('number of user')\n",
    "print('There are {} missing values in user_occupation_id'.format(len(train.loc[train['user_occupation_id']==-1,:])))\n",
    "print('user_occupation_id includes {} in train'.format(train.user_occupation_id.unique()))\n",
    "print('user_occupation_id includes {} in test'.format(test.user_occupation_id.unique()))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('user_star_level').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('user_star_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.xlim((3000,3010))\n",
    "plt.legend(loc=0)\n",
    "plt.figure()\n",
    "plt.hist(train.user_star_level.values,bins=3000)\n",
    "plt.xlabel('user_star_level')\n",
    "plt.ylabel('number of user')\n",
    "plt.xlim((3000,3010))\n",
    "print('There are {} missing values in user_star_level'.format(len(train.loc[train['user_star_level']==-1,:])))\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "f,ax=plt.subplots(2,2,figsize=(12,12))\n",
    "train['user_gender_id'].value_counts().plot.pie(autopct='%1.1f%%',ax=ax[0][0],shadow=True,colors=Blues_9.hex_colors)\n",
    "ax[0][0].set_title('user_gender_id')\n",
    "\n",
    "sns.countplot('user_age_level',data=train,ax=ax[0][1])\n",
    "ax[0][1].set_title('user_age_level')\n",
    "\n",
    "sns.countplot('user_occupation_id',data=train,ax=ax[1][0])\n",
    "ax[1][0].set_title('user_occupation_id')\n",
    "\n",
    "train['user_star_level'].value_counts().sort_index().plot.pie(autopct='%1.1f%%',ax=ax[1][1],shadow=True,colors=PuRd_5.hex_colors)\n",
    "ax[1][1].set_title('user_star_level')\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 店铺信息\n",
    "\n",
    "|字段|解释|\n",
    "| :- | :-| \n",
    "|shop_review_num_level|店铺的评价数量等级，Int类型；取值从0开始，数值越大表示评价数量越多|\n",
    "|shop_review_positive_rate|店铺的好评率，Double类型；取值在0到1之间，数值越大表示好评率越高|\n",
    "|shop_star_level|店铺的星级编号，Int类型；取值从0开始，数值越大表示店铺的星级越高|\n",
    "|shop_score_service|店铺的服务态度评分，Double类型；取值在0到1之间，数值越大表示评分越高|\n",
    "|shop_score_delivery|店铺的物流服务评分，Double类型；取值在0到1之间，数值越大表示评分越高|\n",
    "|shop_score_description|店铺的描述相符评分，Double类型；取值在0到1之间，数值越大表示评分越高|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "for x in ['shop_review_num_level','shop_star_level']:\n",
    "    print(train[x].value_counts())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "plt.figure()\n",
    "plt.plot(train.groupby('shop_id').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlabel('shop_id')\n",
    "plt.ylabel('average is_trade')\n",
    "print('There are {} shop_id in test'.format(len(test.shop_id.unique())))\n",
    "print('There are {} shop_id intersection in train and test'.format(len(np.intersect1d(train.shop_id.values,test.shop_id.values))))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('shop_review_num_level').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlabel('shop_review_num_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('shop_review_positive_rate').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlabel('shop_review_positive_rate')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.xlim((0.8,1))\n",
    "print('There are {} missing shop_review_positive_rate values'.format(len(train.loc[train['shop_review_positive_rate']==-1,:])))\n",
    "plt.show()\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('shop_star_level').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlim((4999,5020))\n",
    "plt.xlabel('shop_star_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('shop_score_service').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlim((0.8,1))\n",
    "plt.xlabel('shop_score_service')\n",
    "plt.ylabel('average is_trade')\n",
    "print('There are {} missing shop_score_service values'.format(len(train.loc[train['shop_score_service']==-1,:])))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('shop_score_delivery').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlim((0.8,1))\n",
    "plt.xlabel('shop_score_delivery')\n",
    "plt.ylabel('average is_trade')\n",
    "print('There are {} missing shop_score_delivery values'.format(len(train.loc[train['shop_score_delivery']==-1,:])))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('shop_score_description').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlim((0.8,1))\n",
    "plt.xlabel('shop_score_description')\n",
    "plt.ylabel('average is_trade')\n",
    "print('There are {} missing shop_score_description values'.format(len(train.loc[train['shop_score_description']==-1,:])))\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f,ax=plt.subplots(2,1,figsize=(12,12))\n",
    "sns.countplot('shop_review_num_level',data=train,ax=ax[0])\n",
    "ax[0].set_title('shop review number level distribution')\n",
    "\n",
    "sns.countplot('shop_star_level',data=train,ax=ax[1])\n",
    "ax[1].set_title('shop star number level distribution')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.style.use('ggplot')\n",
    "f,ax=plt.subplots(4,2,figsize=(14,18))\n",
    "plt.tight_layout(5)\n",
    "sns.boxplot(y=train['shop_review_positive_rate'][train['shop_review_positive_rate']!=-1],ax=ax[0][0])\n",
    "sns.distplot(train['shop_review_positive_rate'][train['shop_review_positive_rate']>0.98],ax=ax[0][1])\n",
    "ax[0][1].set_title('shop review positive rate')\n",
    "\n",
    "\n",
    "sns.boxplot(y=train['shop_score_service'][train['shop_score_service']!=-1],ax=ax[1][0])\n",
    "sns.distplot(train['shop_score_service'][train['shop_score_service']>0.9],ax=ax[1][1])\n",
    "ax[1][1].set_title('shop score service score')\n",
    "\n",
    "\n",
    "sns.boxplot(y=train['shop_score_delivery'][train['shop_score_delivery']!=-1],ax=ax[2][0])\n",
    "sns.distplot(train['shop_score_delivery'][train['shop_score_delivery']>0.9],ax=ax[2][1])\n",
    "ax[2][1].set_title('shop score delivery score')\n",
    "\n",
    "\n",
    "sns.boxplot(y=train['shop_score_description'][train['shop_score_description']!=-1],ax=ax[3][0])\n",
    "sns.distplot(train['shop_score_description'][train['shop_score_description']>0.9],ax=ax[3][1])\n",
    "ax[3][1].set_title('shop description score')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4 广告商品信息\n",
    "\n",
    "|字段|解释|\n",
    "| :- | :-| \n",
    "|item_id|广告商品编号，Long类型|\n",
    "|item_category_list|广告商品的的类目列表|\n",
    "|item_property_list|广告商品的属性列表|\n",
    "|item_brand_id|广告商品的品牌编号，Long类型|\n",
    "|item_city_id|广告商品的城市编号，Long类型|\n",
    "|item_price_level|广告商品的价格等级，Int类型；取值从0开始，数值越大表示价格越高|\n",
    "|item_sales_level|广告商品的销量等级，Int类型；取值从0开始，数值越大表示销量越大|\n",
    "|item_collected_level|广告商品被收藏次数的等级，Int类型；取值从0开始，数值越大表示被收藏次数越大|\n",
    "|item_pv_level|广告商品被展示次数的等级，Int类型；取值从0开始，数值越大表示被展示次数越大|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "for x in ['item_brand_id','item_city_id','item_price_level','item_sales_level','item_collected_level','item_pv_level']:\n",
    "    print(train[x].value_counts())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "plt.figure()\n",
    "plt.hist(train['item_id'].values, bins=100)\n",
    "plt.xlabel('item_id')\n",
    "plt.ylabel('number of items')\n",
    "plt.show()\n",
    "print('There are {} items in train and {} in test'.format(len(train.item_id.unique()),len(test.item_id.unique())))\n",
    "print('There are {} intersect in train and test'.format(len(np.intersect1d(train.item_id.values,test.item_id.values))))\n",
    "train_item_category_list_1=pd.DataFrame([int(i.split(';')[0]) for i in train.item_category_list])\n",
    "train_item_category_list_2=pd.DataFrame([int(i.split(';')[1]) for i in train.item_category_list])\n",
    "train_item_category_list_3=pd.DataFrame([int(i.split(';')[2]) for i in train.item_category_list if len(i.split(';'))==3])\n",
    "test_item_category_list_1=pd.DataFrame([int(i.split(';')[0]) for i in test.item_category_list])\n",
    "test_item_category_list_2=pd.DataFrame([int(i.split(';')[1]) for i in test.item_category_list])\n",
    "test_item_category_list_3=pd.DataFrame([int(i.split(';')[2]) for i in test.item_category_list if len(i.split(';'))==3])\n",
    "\n",
    "print('There are {} item_category_list_1 categories'.format(len(train_item_category_list_1[0].unique())))\n",
    "print('There are {} item_category_list_2 categories'.format(len(train_item_category_list_2[0].unique())))\n",
    "print('There are {} item_category_list_3 categories'.format(len(train_item_category_list_3[0].unique())))\n",
    "print('There are {} item_category_list_3 in train and {} in test'.format(len(train_item_category_list_3[0]),len(test_item_category_list_3[0])))\n",
    "\n",
    "train_item_property_list=pd.DataFrame([int(len(i.split(';'))) for i in train.item_property_list])\n",
    "test_item_property_list=pd.DataFrame([int(len(i.split(';'))) for i in test.item_property_list])\n",
    "plt.figure()\n",
    "plt.plot(train_item_property_list[0], '.-', label='train')\n",
    "plt.plot(test_item_property_list[0], '.-', label='test')\n",
    "plt.title('Number of item properties in train and test')\n",
    "plt.legend(loc=0)\n",
    "plt.ylabel('number of property')\n",
    "plt.show()\n",
    "# train_item_property_list_2=pd.DataFrame([int(i.split(';')[1]) for i in train.item_property_list])\n",
    "# train_item_property_list_3=pd.DataFrame([int(i.split(';')[2]) for i in train.item_property_list if len(i.split(';'))==3])\n",
    "# test_item_property_list_1=pd.DataFrame([int(i.split(';')[0]) for i in test.item_property_list])\n",
    "# test_item_property_list_2=pd.DataFrame([int(i.split(';')[1]) for i in test.item_property_list])\n",
    "# test_item_property_list_3=pd.DataFrame([int(i.split(';')[2]) for i in test.item_property_list if len(i.split(';'))==3])\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('item_brand_id').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('item_brand_id')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)\n",
    "print('There are {} item_brand_id'.format(len(train.item_brand_id.unique())))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('item_city_id').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('item_city_id')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)\n",
    "print('There are {} item_city_id'.format(len(train.item_city_id.unique())))\n",
    "#list(train.item_city_id.values).count(train.groupby('item_city_id').mean()['is_trade'].index[53])\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('item_price_level').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('item_price_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)\n",
    "plt.figure()\n",
    "plt.hist(train.item_price_level.values,bins=100)\n",
    "plt.xlabel('item_price_level')\n",
    "plt.ylabel('number of item_price_level')\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('item_sales_level').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('item_sales_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)\n",
    "plt.show()\n",
    "\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('item_collected_level').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('item_collected_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('item_pv_level').mean()['is_trade'], 'o-', label='is_trade rate')\n",
    "plt.xlabel('item_pv_level')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.legend(loc=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f,ax=plt.subplots(2,2,figsize=(20,20))\n",
    "\n",
    "item_brand_id_num=pd.DataFrame({'brand_id_num':train['item_brand_id'].value_counts()}).reset_index()\n",
    "brand_value=pd.DataFrame({'brand_id_num':item_brand_id_num['brand_id_num'][item_brand_id_num['brand_id_num']<4000].sum()},index=[0])\n",
    "brand_value['index']='below_4000'\n",
    "brand_value=pd.concat([brand_value,item_brand_id_num[item_brand_id_num['brand_id_num']>=4000]])\n",
    "pd.Series(data=brand_value.set_index('index')['brand_id_num']).plot.pie(autopct='%1.1f%%',ax=ax[0][0],shadow=True,colors=Blues_9.hex_colors)\n",
    "ax[0][0].set_title('item_brand_id')\n",
    "ax[0][0].legend(fontsize=7.5)\n",
    "#sns.countplot('item_city_id',data=train,ax=ax[0][1])\n",
    "\n",
    "\n",
    "item_city_id_num=pd.DataFrame({'city_id_num':train['item_brand_id'].value_counts()}).reset_index()\n",
    "city_value=pd.DataFrame({'city_id_num':item_city_id_num['city_id_num'][item_city_id_num['city_id_num']<3000].sum()},index=[0])\n",
    "city_value['index']='below_3000'\n",
    "city_value=pd.concat([city_value,item_city_id_num[item_brand_id_num['brand_id_num']>=3000]])\n",
    "pd.Series(data=city_value.set_index('index')['city_id_num']).plot.pie(autopct='%1.1f%%',ax=ax[0][1],shadow=True,colors=Blues_9.hex_colors)\n",
    "ax[0][1].set_title('item_city_id')\n",
    "\n",
    "sns.countplot('item_sales_level',data=train,ax=ax[1][0])\n",
    "ax[1][0].set_title('item_sales_level')\n",
    "\n",
    "train['item_collected_level'].value_counts().plot.pie(autopct='%1.1f%%',ax=ax[1][1],shadow=True,colors=PuRd_5.hex_colors)\n",
    "ax[1][1].set_title('item_collected_level')\n",
    "ax[1][1].legend(fontsize=7.5)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure() # \n",
    "plt.hist(train['is_trade'].values, bins=100)\n",
    "plt.xlabel('trade information')\n",
    "plt.ylabel('number of trade')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.5 上下文信息"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|字段|解释|\n",
    "|:-|:-|\n",
    "|context_id|上下文信息的编号，Long类型|\n",
    "|context_timestamp|广告商品的展示时间，Long类型;取值是以秒为单位的Unix时间戳，以1天为单位对时间戳进行了偏移|\n",
    "|context_page_id|广告商品的展示⻚面编号，Int类型;取值从1开始，依次增加;在一次搜索的展示结果中第一屏的编号为1，第二 屏的编号为2|\n",
    "|predict_category_property|根据查询词预测的类目属性列表，String类型;数据拼接格式为“category_A:property_A_1,property_A_2,property_A_3;category_B:-1;category_C:property_C_1,property_C_2” ，其中 category_A、category_B、category_C 是预测的三个类目;property_B 取值为-1，表示预测的第二个类 目 category_B 没有对应的预测属性|"
   ]
  },
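  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The format described above can be parsed into a dict; the sketch below uses the example string from the table (the category and property names are placeholders):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example string in the documented format\n",
    "s = 'category_A:property_A_1,property_A_2,property_A_3;category_B:-1;category_C:property_C_1,property_C_2'\n",
    "parsed = {}\n",
    "for part in s.split(';'):\n",
    "    category, props = part.split(':')\n",
    "    # '-1' means the category has no predicted properties\n",
    "    parsed[category] = [] if props == '-1' else props.split(',')\n",
    "print(parsed)"
   ]
  },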
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "for x in ['context_id','context_timestamp','context_page_id','predict_category_property']:\n",
    "    print(train[x].value_counts())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "print('There are {} context_id intersection in train and test'.format(len(np.intersect1d(train.context_id.values,test.context_id.values))))\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('context_timestamp').mean()['is_trade'], '.', label='is_trade rate')\n",
    "plt.xlabel('context_timestamp')\n",
    "plt.ylabel('average is_trade')\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('context_page_id').mean()['is_trade'], '.-', label='is_trade rate')\n",
    "plt.xlabel('context_page_id')\n",
    "plt.ylabel('average is_trade')\n",
    "print('context_page_id start is {} and end is {} in train'.format(train.context_page_id.min(),train.context_page_id.max()))\n",
    "print('context_page_id start is {} and end is {} in test'.format(test.context_page_id.min(),test.context_page_id.max()))\n",
    "\n",
    "train['num_predict_category_property']=[sum(map(lambda i:0 if i[-2:]=='-1' else 1,re.split(',|;',j))) for j in train.predict_category_property]\n",
    "plt.figure()\n",
    "plt.plot(train.groupby('num_predict_category_property').mean()['is_trade'],'.-',label='is_trade rate')\n",
    "plt.xlabel('num_predict_category_property')\n",
    "plt.ylabel('average is_trade')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3 数据预处理"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1 读取数据集"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# round1\n",
    "train_path_people_1='../../datasets/cut/round1/round1_train_cut_by_people.txt'\n",
    "train_path_type_1='../../datasets/cut/round1/round1_train_cut_by_type.txt'\n",
    "test_path_a_1='../../datasets/cut/round1/round1_ijcai_18_test_a_20180301.txt'\n",
    "test_path_b_1='../../datasets/cut/round1/round1_ijcai_18_test_b_20180418.txt'\n",
    "# round2\n",
    "train_path_type_2='../../datasets/cut/round2/round2_train_cut_by_type.txt'\n",
    "test_path_a_2='../../datasets/cut/round2/round2_test_a.txt'\n",
    "test_path_b_2='../../datasets/cut/round2/round2_test_b.txt'\n",
    "train=pd.read_table(train_path_type_1,delimiter=' ')\n",
    "test=pd.read_table(test_path_a_1,delimiter=' ')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 持久化保存一些数据集特征，后面备用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 去除完全相同的行，inplace=True为原地更改\n",
    "train.drop_duplicates(inplace=True)\n",
    "test.drop_duplicates(inplace=True)\n",
    "\n",
    "# train_ad_data 的长度\n",
    "trainLen = len(train)\n",
    "# 训练集是否交易的标签\n",
    "trainlabel = train['is_trade']\n",
    "# 测试集的样本ID\n",
    "testInstanceID = test['instance_id']\n",
    "\n",
    "# 使用字典方便存储\n",
    "serialize_constant = {}\n",
    "serialize_constant['trainLen'] = trainLen\n",
    "serialize_constant['trainlabel'] = trainlabel\n",
    "serialize_constant['testInstanceID'] = testInstanceID\n",
    "\n",
    "# 使用pickle序列化\n",
    "filename = '../../produce/serialize_constant'\n",
    "\n",
    "with open(filename, 'wb') as f:\n",
    "    pickle.dump(serialize_constant, f)\n",
    "\n",
    "# with open(filename, 'rb') as f:  \n",
    "#     serialize_constant = pickle.load(f)\n",
    "#     trainLen = serialize_constant['trainLen']\n",
    "#     trainlabel = serialize_constant['trainlabel']\n",
    "#     testInstanceID = serialize_constant['testInstanceID']"
   ]
  },
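  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reading the constants back is the mirror image of the dump above. A self-contained round trip, using toy values and a temporary file instead of the real path:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import pickle\n",
    "import tempfile\n",
    "\n",
    "# Toy constants in place of trainLen / trainlabel / testInstanceID\n",
    "constants = {'trainLen': 3, 'testInstanceID': [10, 11]}\n",
    "path = os.path.join(tempfile.gettempdir(), 'serialize_constant_demo')\n",
    "with open(path, 'wb') as f:\n",
    "    pickle.dump(constants, f)\n",
    "with open(path, 'rb') as f:\n",
    "    restored = pickle.load(f)\n",
    "print(restored == constants)"
   ]
  },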
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.3 数据预处理"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 将训练集和测试集合并在一起\n",
    "key = list(test) # test 的列名\n",
    "mergeData = pd.concat([train, test], keys=key)\n",
    "# 样本重新编号\n",
    "mergeData = mergeData.reset_index(drop=True)\n",
    "\n",
    "# 将timestamp转换成datetime【%Y-%m-%d %H:%M:%S】\n",
    "def timestamp_datetime(value):\n",
    "    format = '%Y-%m-%d %H:%M:%S'\n",
    "    value = time.localtime(value)\n",
    "    dt = time.strftime(format, value)\n",
    "    return dt # str\n",
    "    \n",
    "# 时间，datetime64[ns]\n",
    "mergeData['time'] = pd.to_datetime(mergeData.context_timestamp.apply(timestamp_datetime))\n",
    "\n",
    "# 初赛是用18-24号来预测25号\n",
    "# 复赛是用8/31-9/6全天和9/7上午来预测9/7下午的数据\n",
    "\n",
    "mergeData['day'] = mergeData.time.dt.day\n",
    "mergeData['hour'] = mergeData.time.dt.hour\n",
    "mergeData['minute'] = mergeData.time.dt.minute\n",
    "\n",
    "# ID从0重新编号\n",
    "lbl = preprocessing.LabelEncoder()\n",
    "for col in ['item_id', 'item_brand_id', 'item_city_id', 'shop_id', 'user_id']:\n",
    "    mergeData[col] = lbl.fit_transform(mergeData[col])"
   ]
  },
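  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The row-by-row timestamp conversion above can also be done in one vectorised call. Note that pd.to_datetime(..., unit='s') yields timezone-naive UTC datetimes, whereas time.localtime uses the local timezone, so on a non-UTC machine the two approaches can differ by the UTC offset. A sketch on toy timestamps:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Toy Unix timestamps (seconds); the real data would use mergeData.context_timestamp\n",
    "ts = pd.Series([1537236955, 1537323355])\n",
    "t = pd.to_datetime(ts, unit='s')\n",
    "# The same .dt accessors used above work on the result\n",
    "print(t.dt.day.tolist(), t.dt.hour.tolist())"
   ]
  },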
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.4 保存为中间文件"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 保存为csv文件\n",
    "mergeData.to_csv('../../produce/mergeData.csv', sep=' ')"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.6.12 64-bit ('tc': conda)",
   "language": "python",
   "name": "python_defaultSpec_1613737428126"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.12-final"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}