{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1 介绍赛题特征工程思路\n",
    "\n",
    "### 1.1、类别型特征的转换\n",
    "\n",
    "类别型特征是指在有限选项内取值的特征，通常为字符串形式。决策树等少数模型能直接处理字符串形式的输入；而逻辑回归、SVM等模型，必须将类别型特征处理成数值型特征后才能正常工作。\n",
    "\n",
    "类别型特征的常用处理方法:\n",
    "\n",
    "1. 序号编码:如成绩可以转换为高、中、低三档，分别用3，2，1表示，转换后依然保留大小关系。\n",
    "2. 独热（one-hot）编码:通常用于处理类别间不具有大小关系的特征，如血型中的A型编码为（1，0，0，0）、B型编码为（0，1，0，0）\n",
    "3. 二进制编码:其和独热编码的思想类似不同之处是二进制编码中允许多位为1。二进制编码的本质是利用二进制对ID进行哈希映射，比独热编码节省空间。\n",
    "\n",
    "\n",
    "\n",
    "原数据集中包括以下特征等已经做了序号编码处理：\n",
    "\n",
    "- 用户性别\n",
    "- 用户年龄\n",
    "- 用户职业\n",
    "- 用户星级\n",
    "\n",
    "\n",
    "\n",
    "使用sklearn.preprocessing.LabelEncoder工具对包括以下特征等进行one-hot编码：\n",
    "\n",
    "- 样本ID\n",
    "- 用户输入关键词时的预测商品种类和属性\n",
    "- 广告商品种类\n",
    "- 广告商品属性\n",
    "\n",
    "\n",
    "\n",
    "### 1.2、用户特征构造\n",
    "\n",
    "#### 1.2.1 用户总体偏好特征\n",
    "\n",
    "- 用户出现次数\n",
    "- 用户总体点击总数\n",
    "- 用户点击广告商品所属商铺总数\n",
    "- 用户点击的广告商品所在页面总数\n",
    "\n",
    "\n",
    "\n",
    "#### 1.2.2 用户行为特征\n",
    "\n",
    "- 用户在每天点击广告商品的次数\n",
    "- 用户在每小时点击广告商品的次数\n",
    "- 用户在每分钟点击广告商品的次数\n",
    "\n",
    "\n",
    "\n",
    "### 1.3、广告商品特征构造\n",
    "\n",
    "#### 1.3.1 广告商品总体特征\n",
    "\n",
    "- 广告商品被用户点击的次数\n",
    "\n",
    "- 广告商品所属类目销量等级数量\n",
    "\n",
    "- 广告商品所属类目价格等级数量\n",
    "\n",
    "- 广告商品销量等级数量\n",
    "\n",
    "- 广告商品收藏等级数量\n",
    "\n",
    "- 广告商品出现次数\n",
    "\n",
    "- 广告商品所属品牌的店铺评分数量\n",
    "\n",
    "- 广告商品所属品牌的店铺星级数量\n",
    "\n",
    "- 广告商品在不同价格等级下的销量等级数量\n",
    "\n",
    "- 广告商品在不同预测类目属性下的销量等级数量\n",
    "\n",
    "- 广告商品在不同收藏次数等级下的商铺数量\n",
    "\n",
    "- 点击广告商品的用户年龄等级数量\n",
    "\n",
    "\n",
    "\n",
    "#### 1.3.2 广告商品比例特征\n",
    "\n",
    "- 广告商品被点击的比例\n",
    "\n",
    "- 广告商品成交比例\n",
    "\n",
    "- 广告商品被点击的次数在所属种类下的广告商品的比例\n",
    "\n",
    "- 广告商品价格等级与所属种类的广告商品平均价格等级的比例\n",
    "\n",
    "- 每个广告商品销量在商品所属种类的平均销量的比例\n",
    "\n",
    "- 广告商品价格占所属种类中商品价格最高值比例\n",
    "\n",
    "- 广告商品所属种类中商品价格最低值与该商品价格的比例\n",
    "\n",
    "- 广告商品销量占所属种类中商品销量最高值比例\n",
    "\n",
    "- 广告商品所属种类中商品销量最低值与该商品销量的比例\n",
    "\n",
    "\n",
    "\n",
    "#### 1.3.3 广告商品地理特征\n",
    "\n",
    "- 广告商品所属类目在不同城市出现的次数\n",
    "\n",
    "- 广告商品在不同城市的展示次数等级数量\n",
    "\n",
    "- 广告商品在不同城市的用户点击次数\n",
    "\n",
    "\n",
    "\n",
    "#### 1.3.4 广告商品平均特征\n",
    "\n",
    "- 广告商品平均价格等级\n",
    "\n",
    "- 不同种类广告商品平均价格等级\n",
    "\n",
    "- 不同种类广告商品不同价格等级下的销量平均值\n",
    "\n",
    "- 不同种类广告商品的销量平均值\n",
    "\n",
    "\n",
    "\n",
    "#### 1.3.5 广告商品其它特征\n",
    "\n",
    "- 广告商品销量与收藏量的差值\n",
    "\n",
    "\n",
    "\n",
    "### 1.4、从时间中提取的特征\n",
    "\n",
    "#### 1.4.1 原始时间特征\n",
    "\n",
    "- 原来的时间戳提取“时”、“分”、“秒”三个时间特征\n",
    "\n",
    "\n",
    "\n",
    "#### 1.4.2 与用户相关的时间特征\n",
    "\n",
    "- 用户点击广告商品行为出现的天数\n",
    "\n",
    "- 每天用户点击广告商品的次数\n",
    "\n",
    "- 每分钟用户点击广告商品的次数\n",
    "\n",
    "- 每小时用户点击广告商品的次数\n",
    "\n",
    "\n",
    "\n",
    "#### 1.4.3 与广告商品相关的时间特征\n",
    "\n",
    "- 每天广告商品被点击的总次数\n",
    "- 每小时广告商品被点击的总次数\n",
    "\n",
    "\n",
    "\n",
    "#### 1.4.4 顺序特征\n",
    "\n",
    "- 每天用户点击广告商品的顺序\n",
    "- 用户点击广告商品的顺序\n",
    "- 用户点击广告商品所属店铺出现顺序\n",
    "\n",
    "\n",
    "\n",
    "#### 1.4.5 按时间比较的统计特征\n",
    "\n",
    "- 广告商品前一天成交数\n",
    "- 广告商品前一天成交用户数\n",
    "- 广告商品前一天被点击的用户数\n",
    "- 广告商品前一天被用户点击的次数\n",
    "- 广告商品今天以前成交数\n",
    "- 广告商品今天以前成交用户数\n",
    "- 广告商品今天以前被点击的用户数\n",
    "- 广告商品今天以前被用户点击的次数"
   ]
  },
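  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The before-today counters can be sketched as a dictionary lookup built from earlier days only, so nothing from day n leaks into day n's features (hypothetical rows; the notebook's own loops in section 2.2 follow the same shape):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "result = pd.DataFrame({'day': [18, 18, 19],\n",
    "                       'item_id': [5, 5, 5],\n",
    "                       'instance_id': [0, 1, 2]})\n",
    "\n",
    "d = 19\n",
    "df1 = result[result['day'] < d]          # all earlier days\n",
    "df2 = result[result['day'] == d].copy()  # the current day\n",
    "\n",
    "# how often each item_id appeared before day d, looked up per row (0 when unseen)\n",
    "item_cnt = df1.groupby('item_id').count()['instance_id'].to_dict()\n",
    "df2['item_cntx'] = df2['item_id'].apply(lambda x: item_cnt.get(x, 0))\n",
    "```"
   ]
  },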
  {
   "cell_type": "markdown",
   "id": "selected-grade",
   "metadata": {},
   "source": [
    "## 2 特征工程"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ambient-perth",
   "metadata": {},
   "source": [
    "### 2.0 准备阶段\n",
    "\n",
    "#### 2.0.1 导入必要的包和库"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "elementary-death",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pandas是一个强大的分析结构化数据的工具集；用于数据挖掘和数据分析，同时也提供数据清洗功能。\n",
    "import pandas as pd\n",
    "# 序列化库\n",
    "import pickle\n",
    "# 预处理库\n",
    "from sklearn import preprocessing\n",
    "# 时间库\n",
    "import datetime\n",
    "\n",
    "# 忽略警告信息\n",
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "enormous-leeds",
   "metadata": {},
   "source": [
    "#### 2.0.2 读取合并数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "conventional-europe",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# the merged table written upstream, space-separated\n",
    "data = pd.read_csv('../../produce/mergeData.csv', sep=' ')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "harmful-layer",
   "metadata": {},
   "source": [
    "### 2.1 构造特征一"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "promising-burner",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 添加一些有用特征\n",
    "\n",
    "# item_category_list长度为2-3个;分割的id字符串，这里提取它的第2个种类\n",
    "item_category_list_2 = pd.DataFrame([int(i.split(';')[1]) for i in data.item_category_list])\n",
    "\n",
    "data['item_category_list_2'] = item_category_list_2\n",
    "\n",
    "'''\n",
    "开始添加一些外部特征\n",
    "'''\n",
    "\n",
    "# 增加每个用户在每天、每天的每个时刻的点击广告商品的次数\n",
    "user_query_day = data.groupby(['user_id', 'day']).size().reset_index().rename(columns={0: 'user_query_day'})\n",
    "data = pd.merge(data, user_query_day, 'left', on=['user_id', 'day'])\n",
    "user_query_day_hour = data.groupby(['user_id', 'day', 'hour']).size().reset_index().rename(\n",
    "columns={0: 'user_query_day_hour'})\n",
    "data = pd.merge(data, user_query_day_hour, 'left', on=['user_id', 'day', 'hour'])\n",
    "\n",
    "# 增加每个广告商品被点击的频率\n",
    "item_id_frequence = data.groupby([ 'item_id']).size().reset_index().rename(columns={0: 'item_id_frequence'})\n",
    "item_id_frequence=item_id_frequence/(data.shape[0])\n",
    "data = pd.merge(data, item_id_frequence, 'left', on=['item_id'])\n",
    "\n",
    "# 增加每个用户每天每分钟点击广告商品的次数\n",
    "num_user_minute = data.groupby(['user_id','day','minute']).size().reset_index().rename(columns = {0:'num_user_day_minute'})\n",
    "data = pd.merge(data, num_user_minute,'left',on = ['user_id','day','minute'])\n",
    "\n",
    "# 增加每天每个用户点击每个广告商品的次数\n",
    "day_user_item_id = data.groupby(['day', 'user_id', 'item_id']).size().reset_index().rename(\n",
    "columns={0: 'day_user_item_id'})\n",
    "data = pd.merge(data, day_user_item_id, 'left', on=['day', 'user_id', 'item_id'])\n",
    "\n",
    "# 增加每天每小时每分钟每个用户点击每个广告商品的次数\n",
    "day_hour_minute_user_item_id = data.groupby(\n",
    "['day', 'hour', 'minute', 'user_id', 'item_id']).size().reset_index().rename(\n",
    "columns={0: 'day_hour_minute_user_item_id'})\n",
    "data = pd.merge(data, day_hour_minute_user_item_id, 'left', on=['day', 'hour', 'minute', 'user_id', 'item_id'])\n",
    "\n",
    "# 增加每天每小时每个用户点击每个广告商品的次数\n",
    "number_day_hour_item_id = data.groupby(['day', 'hour', 'item_id']).size().reset_index().rename(\n",
    "columns={0: 'number_day_hour_item_id'})\n",
    "data = pd.merge(data, number_day_hour_item_id, 'left', on=['day', 'hour', 'item_id'])\n",
    "\n",
    "# 增加每个广告商品被每个用户点击的次数\n",
    "item_user_id = data.groupby(['item_id', 'user_id']).size().reset_index().rename(columns={0: 'item_user_id'})\n",
    "data = pd.merge(data, item_user_id, 'left', on=['item_id', 'user_id'])\n",
    "\n",
    "'''\n",
    "开始添加一些新特征\n",
    "'''\n",
    "\n",
    "# 增加每个被点击广告商品类目在每个城市的次数\n",
    "item_category_city_id = data.groupby(['item_category_list', 'item_city_id']).size().reset_index().rename(\n",
    "columns={0: 'item_category_city_id'})\n",
    "data = pd.merge(data, item_category_city_id, 'left', on=['item_category_list', 'item_city_id'])\n",
    "\n",
    "# 增加每个被点击广告商品类目每种销量等级的次数，等级类数字越大程度越大\n",
    "item_category_sales_level = data.groupby(\n",
    "['item_category_list', 'item_sales_level']).size().reset_index().rename(\n",
    "columns={0: 'item_category_sales_level'})\n",
    "data = pd.merge(data, item_category_sales_level, 'left', on=['item_category_list', 'item_sales_level'])\n",
    "\n",
    "# 增加每个被点击广告商品类目每种价格等级的次数\n",
    "item_category_price_level = data.groupby(\n",
    "['item_category_list', 'item_price_level']).size().reset_index().rename(\n",
    "columns={0: 'item_category_price_level'})\n",
    "data = pd.merge(data, item_category_price_level, 'left', on=['item_category_list', 'item_price_level'])\n",
    "\n",
    "# 增加每个被点击广告商品每种销量等级的次数\n",
    "item_ID_sales_level = data.groupby(['item_id', 'item_sales_level']).size().reset_index().rename(\n",
    "columns={0: 'item_ID_sales_level'})\n",
    "data = pd.merge(data, item_ID_sales_level, 'left', on=['item_id', 'item_sales_level'])\n",
    "\n",
    "# 增加每个被点击广告商品每种收藏等级的次数\n",
    "item_ID_collected_level = data.groupby(['item_id', 'item_collected_level']).size().reset_index().rename(\n",
    "columns={0: 'item_ID_collected_level'})\n",
    "data = pd.merge(data, item_ID_collected_level, 'left', on=['item_id', 'item_collected_level'])\n",
    "\n",
    "'''\n",
    "开始添加一些危险特征\n",
    "'''\n",
    "\n",
    "# 增加每个用户出现次数\n",
    "number_user_id = data.groupby(['user_id']).size().reset_index().rename(columns={0: 'number_user_id'})\n",
    "data = pd.merge(data, number_user_id, 'left', on=['user_id'])\n",
    "\n",
    "# 增加每个商品出现次数\n",
    "number_shop_id = data.groupby(['shop_id']).size().reset_index().rename(columns={0: 'number_shop_id'})\n",
    "data = pd.merge(data, number_shop_id, 'left', on=['shop_id'])\n",
    "\n",
    "lbl = preprocessing.LabelEncoder()\n",
    "\n",
    "# 把【预测的种类：属性列表】按照predict_category_property0..4提取成一列，属性从0重新编号，否则是空字符串\n",
    "for i in range(5):\n",
    "    data['predict_category_property' + str(i)] = lbl.fit_transform(data['predict_category_property'].map(\n",
    "        lambda x: str(str(x).split(';')[i]) if len(str(x).split(';')) > i else ''))\n",
    "    \n",
    "# 把【广告商品的类型列表】item_category_list1..2提取成一列，属性从0重新编号，否则是空字符串\n",
    "for i in range(1, 3):\n",
    "    data['item_category_list' + str(i)] = lbl.fit_transform(data['item_category_list'].map(\n",
    "        lambda x: str(str(x).split(';')[i]) if len(str(x).split(';')) > i else '')) \n",
    "        \n",
    "# 把【广告商品的属性列表】item_property_list0..9提取成一列，属性从0重新编号，否则是空字符串\n",
    "for i in range(10):\n",
    "    data['item_property_list' + str(i)] = lbl.fit_transform(data['item_property_list'].map(\n",
    "        lambda x: str(str(x).split(';')[i]) if len(str(x).split(';')) > i else ''))\n",
    "\n",
    "'''\n",
    "对缺失值进行填充处理，都是填充众数\n",
    "'''\n",
    "\n",
    "# 性别填充为0女性\n",
    "data['gender0'] = data['user_gender_id'].apply(lambda x: x + 1 if x == -1 else x)\n",
    "\n",
    "# 年龄填充为1003，年龄范围是1000-1007\n",
    "# print(data['user_age_level'].value_counts())\n",
    "data['age0'] = data['user_age_level'].apply(lambda x: 1003 if x == -1  else x)\n",
    "\n",
    "# 职业填充为2005，职业范围是2002-2005\n",
    "# print(data['user_occupation_id'].value_counts())\n",
    "data['occupation0'] = data['user_occupation_id'].apply(lambda x: 2005 if x == -1  else x)\n",
    "\n",
    "# 星级填充为3006，星级范围是3000-3010\n",
    "# print(data['user_star_level'].value_counts())\n",
    "data['star0'] = data['user_star_level'].apply(lambda x: 3006 if x == -1 else x)\n",
    "\n",
    "'''\n",
    "开始添加一些新特征\n",
    "'''\n",
    "\n",
    "# 增加每个广告商品被每个用户点击的次数\n",
    "number_item_user_id = data.groupby(['item_id','user_id']).size().reset_index().rename(columns={0: 'number_item_user_id'})\n",
    "data = pd.merge(data, number_item_user_id, 'left',on=['item_id','user_id'])\n",
    "\n",
    "# 增加被点击广告商品的品牌的每种店铺评分出现次数\n",
    "number_item_brand_positive_rate = data.groupby(\n",
    "['item_brand_id', 'shop_review_positive_rate']).size().reset_index().rename(\n",
    "columns={0: 'number_item_brand_positive_rate'})\n",
    "data = pd.merge(data, number_item_brand_positive_rate, 'left',\n",
    "on=['item_brand_id', 'shop_review_positive_rate'])\n",
    "\n",
    "# 增加被点击广告商品的品牌的每种店铺星级出现次数\n",
    "number_item_brand_shop_star = data.groupby(['item_brand_id', 'shop_star_level']).size().reset_index().rename(\n",
    "columns={0: 'number_item_brand_shop_star'})\n",
    "data = pd.merge(data, number_item_brand_shop_star, 'left', on=['item_brand_id', 'shop_star_level'])\n",
    "\n",
    "# 增加被点击广告商品的城市的每种被展示次数等级出现的次数\n",
    "number_item_city_pv_level = data.groupby(['item_city_id', 'item_pv_level']).size().reset_index().rename(\n",
    "columns={0: 'number_item_city_pv_level'})\n",
    "data = pd.merge(data, number_item_city_pv_level, 'left', on=['item_city_id', 'item_pv_level'])\n",
    "\n",
    "# 增加被点击广告商品的城市的每个用户的点击次数\n",
    "number_item_city_user_id = data.groupby(['item_city_id', 'user_id']).size().reset_index().rename(\n",
    "columns={0: 'number_item_city_user_id'})\n",
    "data = pd.merge(data, number_item_city_user_id, 'left', on=['item_city_id', 'user_id'])\n",
    "\n",
    "# 增加被点击广告商品在每个价格等级下的每个销量等级出现次数\n",
    "number_item_price_sales_level = data.groupby(\n",
    "['item_price_level', 'item_sales_level']).size().reset_index().rename(\n",
    "columns={0: 'number_item_price_sales_level'})\n",
    "data = pd.merge(data, number_item_price_sales_level, 'left', on=['item_price_level', 'item_sales_level'])\n",
    "\n",
    "# 增加被点击广告商品在每个预测类目属性下的每个销量等级出现次数\n",
    "number_predict_category_sales_level = data.groupby(\n",
    "['predict_category_property', 'item_sales_level']).size().reset_index().rename(\n",
    "columns={0: 'number_predict_category_sales_level'})\n",
    "data = pd.merge(data, number_predict_category_sales_level, 'left',\n",
    "on=['predict_category_property', 'item_sales_level'])\n",
    "\n",
    "# 增加被点击广告商品在每个收藏次数等级下的每个商铺出现次数\n",
    "number_collected_shop_id = data.groupby(['item_collected_level', 'shop_id']).size().reset_index().rename(\n",
    "columns={0: 'number_collected_shop_id'})\n",
    "data = pd.merge(data, number_collected_shop_id, 'left', on=['item_collected_level', 'shop_id'])\n",
    "\n",
    "\n",
    "# 把【广告商品类目列表】按照每个类目提取成一列，否则是空格\n",
    "for i in range(3):\n",
    "    data['category_%d' % (i)] = data['item_category_list'].apply(\n",
    "        lambda x: x.split(\";\")[i] if len(x.split(\";\")) > i else \" \")\n",
    "\n",
    "# 把【广告商品属性列表】按照每个属性提取成一列，否则是空格\n",
    "for i in range(3):\n",
    "    data['property_%d' % (i)] = data['item_property_list'].apply(\n",
    "        lambda x: x.split(\";\")[i] if len(x.split(\";\")) > i else \" \")\n",
    "    \n",
    "# 把【预测的类目：属性列表】按照每个类目取第一个属性提取成一列，否则是空格\n",
    "for i in range(3):\n",
    "    data['predict_category_%d' % (i)] = data['predict_category_property'].apply(\n",
    "        lambda x: str(x.split(\";\")[i]).split(\":\")[0] if len(x.split(\";\")) > i else \" \")\n",
    "        \n",
    "# 增加每个用户对应的点击广告商品、广告所属商铺、点击在第几天发生、广告所在页数的总数\n",
    "# nunique返回不同值个数\n",
    "for i in ['item_id','shop_id','day','context_page_id']:\n",
    "    temp=data.groupby('user_id').nunique()[i].reset_index().rename(columns={i:'number_'+i+'_query_user'})\n",
    "    data=pd.merge(data,temp,'left',on='user_id')\n",
    "\n",
    "\n",
    "# 提取信息\n",
    "basic_data = data[['instance_id']]\n",
    "ad_information = data[\n",
    "        ['item_id', 'item_category_list', 'item_brand_id', 'item_city_id', 'item_price_level','item_property_list',\n",
    "         'item_sales_level', 'item_collected_level', 'item_pv_level']]\n",
    "user_information = data[\n",
    "        ['user_id', 'user_age_level', 'user_star_level', 'user_occupation_id','user_gender_id']]\n",
    "text_information = data[['context_id', 'context_timestamp', 'context_page_id', 'predict_category_property']]\n",
    "shop_information = data[\n",
    "        ['shop_id', 'shop_review_num_level', 'shop_review_positive_rate', 'shop_star_level', 'shop_score_service',\n",
    "         'shop_score_delivery', 'shop_score_description']]\n",
    "external_information = data[\n",
    "        ['time', 'day', 'hour', 'minute', 'user_query_day', 'user_query_day_hour', 'day_user_item_id', \\\n",
    "         'day_hour_minute_user_item_id',\n",
    "         'number_day_hour_item_id', 'number_user_id', 'number_shop_id', \\\n",
    "         'item_category_list_2', 'item_user_id', 'item_category_city_id', 'item_category_sales_level', \\\n",
    "         'item_ID_sales_level', 'item_ID_collected_level', 'item_category_price_level', \\\n",
    "         'predict_category_property0', 'predict_category_property1', 'predict_category_property2', \\\n",
    "         'predict_category_property3', 'predict_category_property4', 'item_category_list1', \\\n",
    "         'item_category_list2', 'item_property_list0', 'item_property_list1', 'item_property_list2', \\\n",
    "         'item_property_list3', 'item_property_list4', 'item_property_list5', 'item_property_list6', \\\n",
    "         'item_property_list7', 'item_property_list8', 'item_property_list9', 'gender0', 'age0', \\\n",
    "         'occupation0', 'star0', 'number_item_brand_positive_rate', 'number_item_brand_shop_star', \\\n",
    "         'number_item_city_pv_level', 'number_item_city_user_id', 'number_item_price_sales_level', \\\n",
    "         'number_predict_category_sales_level', 'number_collected_shop_id'\n",
    "         ]]\n",
    "\n",
    "# 这些信息合并成一个结果\n",
    "result = pd.concat(\n",
    "    [basic_data, ad_information, user_information, text_information, shop_information, external_information],\n",
    "    axis=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "material-border",
   "metadata": {},
   "source": [
    "### 2.2 构造特征二"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 以下代码作用包括：\n",
    "# 1、增加样本按某个属性分组后每天出现的时间先后顺序特征 \n",
    "# 2、增加统计某个属性在前几天出现次数特征\n",
    "     \n",
    "# 这段代码添加按照每天的每个用户点击广告商品的时间、每个用户点击每个广告商品的时间、每个用户\n",
    "# 点击广告所属每个店铺的时间进行排序的特征\n",
    "for d in range(18, 26):\n",
    "    # 取每一天的样本数据\n",
    "    df1 = result[result['day'] == d]\n",
    "    \n",
    "    # df.rank(method='min')返回从小到大的排名的dataframe，若两个排名相同，取最小排名\n",
    "    # 排名是从1开始的\n",
    "    # 下面的代码按照groupby内容分组，在每组内对所有列进行排序\n",
    "    rnColumn_user = df1.groupby('user_id').rank(method='min')\n",
    "    rnColumn_user_item = df1.groupby(['user_id','item_id']).rank(method='min')\n",
    "    rnColumn_user_shop = df1.groupby(['user_id','shop_id']).rank(method='min')\n",
    "\n",
    "    # 用户出现、用户点击广告商品、用户点击广告所属商铺的时间先后\n",
    "    df1['user_id_order'] = rnColumn_user['context_timestamp']\n",
    "    df1['user_item_id_order'] = rnColumn_user_item['context_timestamp']\n",
    "    df1['user_shop_id_order'] = rnColumn_user_shop['context_timestamp']\n",
    "\n",
    "    # 准备合并的几列属性\n",
    "    df2 = df1[['user_id', 'instance_id', 'item_id', 'user_id_order','user_item_id_order','user_shop_id_order']]\n",
    "    if d == 18:\n",
    "        Df = df2\n",
    "    else:\n",
    "        Df = pd.concat([Df, df2])\n",
    "\n",
    "Df.drop_duplicates(inplace=True)\n",
    "\n",
    "result = pd.merge(result, Df, on=['user_id', 'instance_id', 'item_id'], how='left')\n",
    "\n",
    "# 添加训练集标签\n",
    "filename = '../../produce/serialize_constant'\n",
    "\n",
    "with open(filename, 'rb') as f:  \n",
    "    serialize_constant = pickle.load(f)\n",
    "    trainlabel = serialize_constant['trainlabel']\n",
    "result['is_trade'] = trainlabel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 这段代码增加一些在第n天出现的某个属性，它在第n-1天出现的次数的特征\n",
    "for d in range(18, 26): \n",
    "    df1 = result[result['day'] == d - 1] # 前一天\n",
    "    df2 = result[result['day'] == d]  # 当天\n",
    "\n",
    "    df_cvr = result[(result['day'] == d - 1) & (result['is_trade'] == 1)] # 前一天且已经交易的\n",
    "\n",
    "    # 按照属性分组进行样本数量统计，然后转化为{column -> {index -> value}}的形式\n",
    "    user_item_cnt = df1.groupby(['item_id', 'user_id']).count()['instance_id'].to_dict()\n",
    "\n",
    "    user_cnt = df1.groupby(by='user_id').count()['instance_id'].to_dict()\n",
    "    item_cnt = df1.groupby(by='item_id').count()['instance_id'].to_dict()\n",
    "    shop_cnt = df1.groupby(by='shop_id').count()['instance_id'].to_dict()\n",
    "    item_cvr_cnt = df_cvr.groupby(by='item_id').count()['instance_id'].to_dict()\n",
    "    user_cvr_cnt = df_cvr.groupby(by='user_id').count()['instance_id'].to_dict()\n",
    "\n",
    "    # 统计某个属性在第n天出现时候，它在第n-1天出现的次数\n",
    "    df2['item_cvr_cnt1'] = df2['item_id'].apply(lambda x: item_cvr_cnt.get(x, 0))\n",
    "    df2['user_cvr_cnt1'] = df2['user_id'].apply(lambda x: user_cvr_cnt.get(x, 0))\n",
    "    df2['user_cnt1'] = df2['user_id'].apply(lambda x: user_cnt.get(x, 0))\n",
    "    \n",
    "    # tuple()变元组，axis=1对行进行操作\n",
    "    df2['user_item_cnt1'] = df2[['item_id', 'user_id']].apply(lambda x: user_item_cnt.get(tuple(x), 0), axis=1)\n",
    "    \n",
    "    # 取以下特征进行合并\n",
    "    df2 = df2[['user_item_cnt1', 'user_cnt1', \\\n",
    "       'item_cvr_cnt1', 'user_cvr_cnt1', \\\n",
    "       'item_id', 'user_id', 'instance_id']]\n",
    "    if d == 18:\n",
    "        Df2 = df2\n",
    "    else:\n",
    "        Df2 = pd.concat([df2, Df2])\n",
    "\n",
    "Df2.drop_duplicates(inplace=True)\n",
    "\n",
    "result = pd.merge(result, Df2, on=['instance_id', 'item_id', 'user_id'], how='left')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 这段代码增加一些在第n天出现的某个属性，它在第0..n-1天出现的次数的特征\n",
    "for d in range(18, 26):\n",
    "    df1 = result[result['day'] < d] # 前面所有天\n",
    "    df2 = result[result['day'] == d] # 当天\n",
    "\n",
    "    df_cvr = result[(result['day'] < d) & (result['is_trade'] == 1)] # 前面所有天且已经交易的\n",
    "\n",
    "    # 按照属性分组进行样本数量统计，然后转化为{column -> {index -> value}}的形式\n",
    "    user_item_cnt = df1.groupby(['item_id', 'user_id']).count()['instance_id'].to_dict()\n",
    "    user_cnt = df1.groupby(by='user_id').count()['instance_id'].to_dict()\n",
    "    item_cvr_cnt = df_cvr.groupby(by='item_id').count()['instance_id'].to_dict()\n",
    "    user_cvr_cnt = df_cvr.groupby(by='user_id').count()['instance_id'].to_dict()\n",
    "\n",
    "    # 统计某个属性在第n天出现时候，它在第0..n-1天出现的次数\n",
    "    df2['item_cvr_cntx'] = df2['item_id'].apply(lambda x: item_cvr_cnt.get(x, 0))\n",
    "    df2['user_cvr_cntx'] = df2['user_id'].apply(lambda x: user_cvr_cnt.get(x, 0))\n",
    "    df2['user_item_cntx'] = df2[['item_id', 'user_id']].apply(lambda x: user_item_cnt.get(tuple(x), 0), axis=1) # tuple()变元组，axis=1对行进行操作\n",
    "    df2['user_cntx'] = df2['user_id'].apply(lambda x: user_cnt.get(x, 0))\n",
    "\n",
    "    # 取以下特征进行合并\n",
    "    df2 = df2[['user_item_cntx', 'user_cntx',\n",
    "       'item_cvr_cntx', 'user_cvr_cntx', \\\n",
    "       'item_id', 'user_id', 'instance_id']]\n",
    "\n",
    "    if d == 18:\n",
    "        Df2 = df2\n",
    "    else:\n",
    "        Df2 = pd.concat([df2, Df2])\n",
    "\n",
    "Df2.drop_duplicates(inplace=True)\n",
    "\n",
    "result = pd.merge(result, Df2, on=['instance_id', 'item_id', 'user_id'], how='left')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "african-directory",
   "metadata": {},
   "source": [
    "### 2.3 构造特征三"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "upper-meditation",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 增加额外特征\n",
    "'''\n",
    "一些额外特征有着很好的性能表现，比如广告商品是店铺某种类中最贵还是最便宜这个特征\n",
    "'''\n",
    "# 增加特征：被展示的广告商品中销量比例\n",
    "result['sales_div_pv'] = result.item_sales_level / (1 + result.item_pv_level)\n",
    "# na_action='ignore'表示如果x是NaN值就忽略掉\n",
    "result['sales_div_pv'] = result.sales_div_pv.map(lambda x: int(10 * x), na_action='ignore')\n",
    "\n",
    "# 增加特征：每天广告商品被点击的总次数\n",
    "number_click_day = result.groupby(['day']).size().reset_index().rename(columns={0:'number_click_day'})\n",
    "result = pd.merge(result,number_click_day,'left',on=['day'])\n",
    "\n",
    "# 增加特征：每小时广告商品被点击的总次数\n",
    "number_click_hour = result.groupby(['hour']).size().reset_index().rename(columns={0:'number_click_hour'})\n",
    "result = pd.merge(result,number_click_hour,'left',on=['hour'])\n",
    "\n",
    "# 增加特征：每个广告商品被用户点击，这些用户年龄的不同值个数，值越大说明广告覆盖的人群更广\n",
    "# nunique返回不同值个数\n",
    "temp = result.groupby('item_id')['user_age_level'].nunique().reset_index().rename(columns={'user_age_level': 'number_' + 'user_age_level' + '_query_item'})\n",
    "result = pd.merge(result, temp, 'left', on=['item_id'])\n",
    "\n",
    "# 增加特征：每个种类的每个广告商品被点击的次数\n",
    "# 注意：item_category_list是按照树形的方式展开的，因此item_category_list_1是最粗的，所以没有进行分析\n",
    "number_category_item = result.groupby(['item_category_list_2','item_id']).size().reset_index().rename(columns={0:'number_category_item'})\n",
    "result = pd.merge(result,number_category_item,'left',on=['item_category_list_2','item_id'])\n",
    "\n",
    "# 增加特征：每个种类的广告商品被点击的次数，以种类为单位\n",
    "number_category2 = result.groupby(['item_category_list_2']).size().reset_index().rename(columns={0:'number_category2'})\n",
    "result = pd.merge(result,number_category2,'left',on=['item_category_list_2'])\n",
    "\n",
    "# 增加特征：每个广告商品被点击的次数在商品所属种类的比例\n",
    "result['prob_item_id_category2'] = result['number_category_item']/result['number_category2']\n",
    "\n",
    "# 扔掉number_category2和number_category_item两个特征\n",
    "result = result.drop(['number_category2','number_category_item'],axis=1)\n",
    "\n",
    "# 增加特征：每个种类的每个广告商品平均价格等级\n",
    "ave_price_category_item = result.groupby(['item_category_list_2','item_id']).mean()['item_price_level'].reset_index().rename(columns={'item_price_level':'ave_price_category_item'})\n",
    "result = pd.merge(result,ave_price_category_item,'left',on=['item_category_list_2','item_id'])\n",
    "\n",
    "# 增加特征：每个种类的商品平均价格等级，以种类为单位\n",
    "ave_price_category = result.groupby(['item_category_list_2']).mean()['item_price_level'].reset_index().rename(columns={'item_price_level':'ave_price_category'})\n",
    "result = pd.merge(result,ave_price_category,'left',on=['item_category_list_2'])\n",
    "\n",
    "# 增加特征：每个广告商品价格在商品所属种类的平均价格的比例\n",
    "result['prob_item_price_to_ave_category2'] = result['item_price_level']/result['ave_price_category']\n",
    "\n",
    "# 增加特征：每个种类的每个广告商品的每种价格下的销量平均值\n",
    "ave_sales_price_category_item = result.groupby(['item_category_list_2','item_id','item_price_level']).mean()['item_sales_level'].reset_index().rename(columns={'item_sales_level':'ave_sales_price_category_item'})\n",
    "result = pd.merge(result,ave_sales_price_category_item,'left',on=['item_category_list_2','item_id','item_price_level'])\n",
    "\n",
    "# 增加特征：每个种类的广告商品的销量平均值，以种类为单位\n",
    "ave_sales_level_category = result.groupby(['item_category_list_2']).mean()['item_sales_level'].reset_index().rename(columns={'item_sales_level':'ave_sales_level_category'})\n",
    "result = pd.merge(result,ave_sales_level_category,'left',on=['item_category_list_2'])\n",
    "\n",
    "# 每个广告商品销量在商品所属种类的平均销量的比例\n",
    "result['prob_ave_category_sales_item_sales'] = result['item_sales_level']/result['ave_sales_level_category']\n",
    "\n",
    "# 增加特征：广告商品所属种类中商品价格最高值\n",
    "max_price_category = result.groupby(['item_category_list_2'])['item_price_level'].max().reset_index().rename(columns={'item_price_level':'max_price_category'})\n",
    "result = pd.merge(result,max_price_category,'left',on=['item_category_list_2'])\n",
    "\n",
    "# 增加特征：广告商品价格占所属种类中商品价格最高值比例，并向下取整\n",
    "result['is_max_price_category'] = result['item_price_level']/result['max_price_category']\n",
    "result['is_max_price_category'] = result['is_max_price_category'].map(lambda x: int(x), na_action='ignore')\n",
    "\n",
    "# 增加特征：广告商品所属种类中商品价格最低值\n",
    "min_price_category = result.groupby(['item_category_list_2'])['item_price_level'].min().reset_index().rename(columns={'item_price_level':'min_price_category'})\n",
    "result = pd.merge(result,min_price_category,'left',on=['item_category_list_2'])\n",
    "\n",
    "# 增加特征：广告商品所属种类中商品价格最低值与该商品价格的比例，并向下取整\n",
    "result['is_min_price_category'] = result['min_price_category']/result['item_price_level']\n",
    "result['is_min_price_category'] = result['is_min_price_category'].map(lambda x: int(x), na_action='ignore')\n",
    "\n",
    "# 扔掉max_price_category和min_price_category两个特征\n",
    "result = result.drop(['max_price_category','min_price_category'],axis=1)\n",
    "\n",
    "# 增加特征：广告商品所属种类中商品销量最高值\n",
    "max_sales_category = result.groupby(['item_category_list_2'])['item_sales_level'].max().reset_index().rename(columns={'item_sales_level':'max_sales_category'})\n",
    "result = pd.merge(result,max_sales_category,'left',on=['item_category_list_2'])\n",
    "\n",
    "# 增加特征：广告商品销量占所属种类中商品销量最高值比例，并向下取整\n",
    "result['is_max_sales_category'] = result['item_sales_level']/result['max_sales_category']\n",
    "result['is_max_sales_category'] = result['is_max_sales_category'].map(lambda x: int(x), na_action='ignore')\n",
    "\n",
    "# 增加特征：广告商品所属种类中商品销量最低值\n",
    "min_sales_category = result.groupby(['item_category_list_2'])['item_sales_level'].min().reset_index().rename(columns={'item_sales_level':'min_sales_category'})\n",
    "result = pd.merge(result,min_sales_category,'left',on=['item_category_list_2'])\n",
    "\n",
    "# 增加特征：广告商品所属种类中商品销量最低值与该商品销量的比例，并向下取整\n",
    "result['is_min_sales_category'] = result['min_sales_category']/result['item_sales_level']\n",
    "result['is_min_sales_category'] = result['is_min_sales_category'].map(lambda x: int(x), na_action='ignore')\n",
    "\n",
    "# 扔掉max_sales_category和min_sales_category两个特征\n",
    "result = result.drop(['max_sales_category', 'min_sales_category'], axis=1)\n",
    "\n",
    "# 增加特征：商品销量 - 商品收藏\n",
    "result['sales_minus_collected'] = result['item_sales_level'] - result['item_collected_level']\n",
    "\n",
    "# 扔掉以下列\n",
    "result = result.drop(\n",
    "    ['item_category_list', 'item_property_list', 'predict_category_property', 'time']\n",
    "    , axis=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "republican-obligation",
   "metadata": {},
   "source": [
    "### 2.4 特征保存"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "failing-blanket",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 保存\n",
    "result.to_csv('../../produce/featureData.csv', sep=' ')        "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.6.12 64-bit ('tc': conda)",
   "language": "python",
   "name": "python_defaultSpec_1613701977821"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.12-final"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}