{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The training set covers 5 cities (the semi-final data loaded below actually contains 11, city_A through city_K); each city directory holds the following files:\n",
    "\n",
    "1. Daily new infections per region. File: infection.csv. Daily data for the first 45 days; format: date,region_id,new_infections; comma-separated.\n",
    "\n",
    "2. Inter-city migration index. File: migration.csv. Daily data for 45 days; format: date,departure_city,arrival_city,migration_index; comma-separated.\n",
    "\n",
    "3. Grid crowd-density index. File: density.csv. Sampled two days per week within the 45 days; format: date,hour,grid_center_lon,grid_center_lat,density_index; comma-separated.\n",
    "\n",
    "4. Grid-to-grid transfer strength. File: transfer.csv. Within-city transfer strength between grids; format: hour,start_grid_lon,start_grid_lat,end_grid_lon,end_grid_lat,transfer_strength; comma-separated.\n",
    "\n",
    "5. Grid-to-region mapping. File: grid_attr.csv. The region ID each grid belongs to; format: grid_center_lon,grid_center_lat,region_id; comma-separated.\n",
    "\n",
    "6. Weather data. File: weather.csv. Daily data for 45 days; format: date,hour,temperature,humidity,wind_direction,wind_speed,wind_force,weather; comma-separated.\n",
    "\n"
   ]
  },
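  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For illustration, a file in the documented infection.csv format can be parsed like this (the two rows are invented sample data; the real files are read from disk further below):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import io\n",
    "import pandas as pd\n",
    "\n",
    "# Invented two-row sample in the documented format: date,region_id,new_infections\n",
    "sample = '21200501,0,3\\n21200501,1,5\\n'\n",
    "infection = pd.read_csv(io.StringIO(sample), header=None,\n",
    "                        names=['date', 'region_id', 'new_infections'])\n",
    "print(infection.shape)  # (2, 3)\n"
   ]
  },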
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Baseline Method\n",
    "## Data Processing + Training-Set Construction\n",
    "Cities cannot be processed independently, because people migrate between them; the relevant fields live in the migration table.\n",
    "To regress the number of new infections with an ML/DL model, each training sample should carry the following attributes:\n",
    "- New infections in the current region on the current date: this is the training label\n",
    "- City\n",
    "- Date\n",
    "- Region ID\n",
    "- Population arriving from other cities over the past N days (N can be set to the incubation period), i.e. inbound population (from the **migration** data), plus the past infection counts of those other cities\n",
    "- New infections on the previous day, or the previous N days (**infection** data)\n",
    "- Weather indicators over the past N days (**weather** data)\n",
    "- Crowd density of the region over the past N days (**density** data)\n",
    "- Transfer strength between regions (**transfer** data), plus the infection situation of the regions connected to this one\n",
    "\n",
    "**Notes**:\n",
    "\n",
    "- density and transfer express location as longitude/latitude rather than region IDs, so the **grid_attr** data is needed to map coordinates to region IDs.\n",
    "- Most tables can be joined to each other through their **date** column.\n",
    "\n",
    "That is the basic processing idea, plus a quick look at what each of the 6 data files contributes. pandas implements it well: the core operation is joining tables on their shared columns.\n",
    "\n",
    "### Concrete pandas operations\n",
    "- 11 cities with 6 tables each, 66 tables in total. The only table that crosses cities is migration\n",
    "- Joining tables requires the date and ID columns\n",
    "- Converting coordinates to region IDs is the awkward part: the data volume is large, so the tables are grouped (groupby) first to shrink them\n",
    "\n",
    "\n",
    "## Regression Models (scikit-learn library)\n",
    "### xgboost & lightgbm & catboost\n",
    "### nn"
   ]
  },
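  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the join idea above, on tiny invented frames (the column names mirror the ones assigned when loading the data below; the values are made up):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Invented mini-tables standing in for infection.csv and a per-day weather summary\n",
    "infection = pd.DataFrame({'date': [21200501, 21200501, 21200502],\n",
    "                          'region_id': [0, 1, 0],\n",
    "                          'index': [3, 5, 4]})\n",
    "weather = pd.DataFrame({'date': [21200501, 21200502],\n",
    "                        'temperature': [20.5, 22.0]})\n",
    "# Join on the shared date column: each (date, region) sample picks up that day's weather\n",
    "df = pd.merge(infection, weather, on='date', how='left')\n",
    "print(df.shape)  # (3, 4): three samples, temperature column attached\n"
   ]
  },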
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Reading the per-city training data\n",
    "The Solotion class reads the training data, one pandas DataFrame per file with proper column names, and stores everything in one dictionary:\n",
    "\n",
    "self.dic_data={\n",
    "\n",
    "'city_A':{'transfer', 'density', 'migration', 'infection', 'weather', 'grid_attr'}, \n",
    "\n",
    "'city_B':{'transfer', 'density', 'migration', 'infection', 'weather', 'grid_attr'}, \n",
    "\n",
    "'city_C':{'transfer', 'density', 'migration', 'infection', 'weather', 'grid_attr'},\n",
    "\n",
    "'city_D':{'transfer', 'density', 'migration', 'infection', 'weather', 'grid_attr'},\n",
    "\n",
    "'city_E':{'transfer', 'density', 'migration', 'infection', 'weather', 'grid_attr'}\n",
    "\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import pickle"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Solotion(object):\n",
    "    def __init__(self,train_data_dir):\n",
    "        self.train_data_dir=train_data_dir\n",
    "        self.dic_data={}\n",
    "    #\n",
    "    def read_train_data(self):\n",
    "        city_names=sorted(os.listdir(self.train_data_dir))\n",
    "        for city_name in city_names:\n",
    "            if city_name[-4:]=='.csv':  # skip stray csv files at the top level\n",
    "                continue\n",
    "            print('Reading data for %s ...' % city_name)\n",
    "            city_path=os.path.join(self.train_data_dir,city_name)  # one directory per city\n",
    "            self.read_perCity(city_name,city_path)\n",
    "            print('Finished reading %s.' % city_name)\n",
    "    #\n",
    "    def read_perCity(self,city_name,city_path):\n",
    "        #city_ID=city_name[-1]  #A,B,C....\n",
    "        data={}\n",
    "        transfer=pd.read_csv(os.path.join(city_path,'transfer.csv'),header=None)\n",
    "        transfer.columns = ['hour', 'start_grid_x','start_grid_y','end_grid_x','end_grid_y','index']\n",
    "        #\n",
    "        density=pd.read_csv(os.path.join(city_path,'density.csv'),header=None)\n",
    "        density.columns = ['date', 'hour','grid_x','grid_y','index']\n",
    "        #\n",
    "        migration=pd.read_csv(os.path.join(city_path,'migration.csv'),header=None)\n",
    "        migration.columns = ['date', 'departure_city','arrival_city','index']\n",
    "        #\n",
    "        infection=pd.read_csv(os.path.join(city_path,'infection.csv'),header=None)\n",
    "        infection.columns = ['city','region_id','date', 'index']\n",
    "        #\n",
    "        weather=pd.read_csv(os.path.join(city_path,'weather.csv'),header=None)\n",
    "        weather.columns = ['data','hour', 'temperature','humidity','wind_direction','wind_speed','wind_force','weather']  # 'data' is a typo for 'date'; kept because later cells rely on it\n",
    "        #\n",
    "        grid_attr=pd.read_csv(os.path.join(city_path,'grid_attr.csv'),header=None)\n",
    "        grid_attr.columns = ['grid_x', 'grid_y','region_id']\n",
    "        # collect the six tables into the per-city dict\n",
    "        data['transfer']=transfer\n",
    "        data['density']=density\n",
    "        data['migration']=migration\n",
    "        data['infection']=infection\n",
    "        data['weather']=weather\n",
    "        data['grid_attr']=grid_attr\n",
    "        # store into the master dict, keyed by city name\n",
    "        self.dic_data[city_name]=data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Reading data for city_A ...\n",
      "Finished reading city_A.\n",
      "Reading data for city_B ...\n",
      "Finished reading city_B.\n",
      "Reading data for city_C ...\n",
      "Finished reading city_C.\n",
      "Reading data for city_D ...\n",
      "Finished reading city_D.\n",
      "Reading data for city_E ...\n",
      "Finished reading city_E.\n",
      "Reading data for city_F ...\n",
      "Finished reading city_F.\n",
      "Reading data for city_G ...\n",
      "Finished reading city_G.\n",
      "Reading data for city_H ...\n",
      "Finished reading city_H.\n",
      "Reading data for city_I ...\n",
      "Finished reading city_I.\n",
      "Reading data for city_J ...\n",
      "Finished reading city_J.\n",
      "Reading data for city_K ...\n",
      "Finished reading city_K.\n"
     ]
    }
   ],
   "source": [
    "s=Solotion('E:/迅雷下载/train_data_semi_final_round/')\n",
    "s.read_train_data()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(dict_keys(['city_A', 'city_B', 'city_C', 'city_D', 'city_E', 'city_F', 'city_G', 'city_H', 'city_I', 'city_J', 'city_K']),\n",
       " dict_keys(['transfer', 'density', 'migration', 'infection', 'weather', 'grid_attr']))"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_data=s.dic_data  # train_data stores all of the loaded tables\n",
    "train_data.keys(),train_data['city_A'].keys()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>hour</th>\n",
       "      <th>start_grid_x</th>\n",
       "      <th>start_grid_y</th>\n",
       "      <th>end_grid_x</th>\n",
       "      <th>end_grid_y</th>\n",
       "      <th>index</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>145.453431</td>\n",
       "      <td>29.461634</td>\n",
       "      <td>145.45706</td>\n",
       "      <td>29.477293</td>\n",
       "      <td>0.1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0</td>\n",
       "      <td>145.453431</td>\n",
       "      <td>29.461634</td>\n",
       "      <td>145.45706</td>\n",
       "      <td>29.477293</td>\n",
       "      <td>0.3</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   hour  start_grid_x  start_grid_y  end_grid_x  end_grid_y  index\n",
       "0     0    145.453431     29.461634   145.45706   29.477293    0.1\n",
       "1     0    145.453431     29.461634   145.45706   29.477293    0.3"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_data['city_A']['transfer'].head(2)  # sanity-check the output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"\\npkl_file = open('train_data.pkl', 'rb')\\n\\ntrain_data = pickle.load(pkl_file)\\n\""
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Serialize the data once so later runs can skip the slow CSV reads\n",
    "'''\n",
    "import pickle\n",
    "with open('/home/aistudio/work/train_data.pkl', 'wb') as output_pkl:\n",
    "    pickle.dump(train_data, output_pkl)\n",
    "'''\n",
    "with open('/home/aistudio/work/train_data.pkl', 'rb') as pkl_file:\n",
    "    train_data = pickle.load(pkl_file)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Preprocessing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''\n",
    "# When building features, the infection table can serve as the anchor\n",
    "# Explore city_A's data\n",
    "city_A_data=train_data['city_A']\n",
    "transfer=city_A_data['transfer']\n",
    "density=city_A_data['density']\n",
    "migration=city_A_data['migration']\n",
    "infection=city_A_data['infection']\n",
    "weather=city_A_data['weather']\n",
    "grid_attr=city_A_data['grid_attr']\n",
    "print(density.date.unique())  # density is only sampled on 12 days\n",
    "print(len(grid_attr.grid_x.unique()),len(grid_attr.grid_y.unique()))\n",
    "print(len(density.grid_x.unique()),len(density.grid_y.unique()))\n",
    "print(len(transfer.start_grid_x.unique()),len(transfer.start_grid_y.unique()),len(transfer.end_grid_x.unique()),len(transfer.end_grid_y.unique()))\n",
    "'''"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The printouts above show that the coordinate sets are messy and do not line up across tables, so coordinates need to be converted to region IDs before the tables can be joined.\n",
    "\n",
    "The conversion can use a nearest-neighbour rule: for a given (longitude, latitude) point,\n",
    "\n",
    "compute its distance to the centre coordinate of every region and assign the point to the region with the smallest distance."
   ]
  },
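  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The nearest-neighbour rule can also be vectorized with NumPy broadcasting, which avoids a Python loop per coordinate (a sketch on invented region centres, not the real grid_attr data):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Invented region centres (lon, lat); row index = region_id\n",
    "centers = np.array([[145.45, 29.46],\n",
    "                    [145.50, 29.50],\n",
    "                    [145.40, 29.40]])\n",
    "# Coordinates to assign\n",
    "points = np.array([[145.451, 29.461],\n",
    "                   [145.490, 29.510]])\n",
    "# Squared distances, shape (n_points, n_centers), via broadcasting\n",
    "d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)\n",
    "region_ids = d2.argmin(axis=1)  # nearest centre per coordinate\n",
    "print(region_ids)  # [0 1]\n"
   ]
  },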
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "# # The code below converts the density and transfer data so that coordinates map to region IDs\n",
    "# for city_id in ['A','B','C','D','E','F','G','H','I','J','K']:\n",
    "#     # When building features, the infection table serves as the anchor\n",
    "#     city_data=train_data['city_'+city_id]\n",
    "#     transfer=city_data['transfer']\n",
    "#     density=city_data['density']\n",
    "#     migration=city_data['migration']\n",
    "#     infection=city_data['infection']\n",
    "#     weather=city_data['weather']\n",
    "#     grid_attr=city_data['grid_attr']\n",
    "#     # e.g. city_A has 118 region IDs in total; find the centre of each region\n",
    "#     grouped_id=grid_attr.groupby(['region_id'])\n",
    "#     region_center=grouped_id[['grid_x','grid_y']].agg(np.mean).reset_index()\n",
    "#     #---------------- function mapping a coordinate to its region ID --------------------\n",
    "#     # converting a single record is slow, hence the memo dict further below\n",
    "#     def xy_to_id(x,y):\n",
    "#         distance={}\n",
    "#         for index,Id,x0,y0 in region_center.itertuples():\n",
    "#             distance[Id]=(x-x0)*(x-x0)+(y-y0)*(y-y0)  # key by the real region_id, not the row position\n",
    "#         distance=sorted(distance.items(),key=lambda item:item[1])  # sort by distance, closest first\n",
    "#         return distance[0][0]  # the region ID nearest to this coordinate\n",
    "#     #---------------- process the density data (35906205 rows) --------------------\n",
    "#     '''\n",
    "#     Replace the coordinate columns in density and transfer with region IDs and drop the\n",
    "#     longitude/latitude columns. Converting the raw rows directly is unacceptably slow, so the\n",
    "#     hour information is dropped first: groupby on coordinates and average over each whole day,\n",
    "#     which shrinks the data considerably.\n",
    "#     '''\n",
    "#     #---------------- for the 12 density dates, groupby coordinates and take the daily mean --------------------\n",
    "#     grouped=density.groupby(['date'])  # yields 12 groups, one per sampled day\n",
    "#     #[21200506, 21200509, 21200513, 21200516, 21200520, 21200523,21200527, 21200530, 21200603, 21200606, 21200610, 21200613]\n",
    "#     dates=density.date.unique()  # the 12 sampled dates: 21200506, ..., 21200613\n",
    "#     density_dic={}  # split density into 12 parts, processed and stored separately\n",
    "#     for per in dates:\n",
    "#         print(\"Processing date %d\"%per)\n",
    "#         d=grouped.get_group(per)  # per: 21200506, ..., 12 days in total\n",
    "#         d_xy=d.groupby(['grid_x','grid_y'])  # group by coordinates\n",
    "#         mean_index=d_xy['index'].agg(np.mean).reset_index()\n",
    "#         density_dic[per]=mean_index\n",
    "#     print(\"density groupby done .............\")\n",
    "#     #print(density_dic[21200506].head())  # inspect the data for 21200506\n",
    "\n",
    "\n",
    "#     #---------------- process the 12 days one by one: replace coordinates with region IDs --------------------\n",
    "\n",
    "#     # Even after the groupby there are still ~190k coordinate pairs per day, but the 12 days\n",
    "#     # mostly repeat, so a memo dict avoids recomputing coordinates seen before\n",
    "#     memory_dic={}\n",
    "#     for per in dates:\n",
    "#         print(\"Converting IDs for date %d\"%per)\n",
    "#         temp=density_dic[per][['grid_x','grid_y']]\n",
    "#         re_list=[]\n",
    "#         for index,x,y in temp.itertuples():\n",
    "#             str_xy=str(x)+'_'+str(y)  # separator avoids key collisions between different pairs\n",
    "#             if str_xy in memory_dic:  # already computed: take it from the memo dict\n",
    "#                 re_list.append(memory_dic[str_xy])\n",
    "#                 continue\n",
    "#             re_id=xy_to_id(x,y)  # region ID for this coordinate\n",
    "#             re_list.append(re_id)\n",
    "#             memory_dic[str_xy]=re_id\n",
    "#         density_dic[per]['region_id']=re_list  # append the converted ID column\n",
    "#         density_dic[per].drop(['grid_x','grid_y'],axis=1,inplace=True)\n",
    "#         grouped_density=density_dic[per].groupby(['region_id'])\n",
    "#         density_dic[per]=grouped_density['index'].agg(np.mean).reset_index()\n",
    "\n",
    "#     # each density_dic[per] now holds a dataframe with the per-region density for day `per`\n",
    "#     #-------------------- pickle the result so this does not have to be rerun ---------------------\n",
    "\n",
    "#     import pickle\n",
    "#     output_pkl = open('dataset/density_'+city_id+'.pkl', 'wb')\n",
    "#     pickle.dump(density_dic, output_pkl)\n",
    "\n",
    "#     '''\n",
    "#     pkl_file = open('/home/aistudio/work/density_A.pkl', 'rb')\n",
    "#     density = pickle.load(pkl_file)\n",
    "#     '''\n",
    "#     #---------------- process the transfer data (5670548 rows) --------------------\n",
    "#     grouped_trans=transfer.groupby(['start_grid_x','start_grid_y','end_grid_x','end_grid_y'])\n",
    "#     mean_index_trans=grouped_trans['index'].agg(np.mean).reset_index()\n",
    "#     start_list=[]\n",
    "#     end_list=[]\n",
    "#     for i,sx,sy,ex,ey,_ in mean_index_trans.itertuples():\n",
    "#         if i%500000==0:  # print progress\n",
    "#             print(i)\n",
    "#         start_xy=str(sx)+'_'+str(sy)\n",
    "#         end_xy=str(ex)+'_'+str(ey)\n",
    "#         if start_xy in memory_dic:  # take memoized results where available\n",
    "#             start_list.append(memory_dic[start_xy])\n",
    "#         else:\n",
    "#             start_id=xy_to_id(sx,sy)\n",
    "#             start_list.append(start_id)\n",
    "#             memory_dic[start_xy]=start_id\n",
    "#         if end_xy in memory_dic:\n",
    "#             end_list.append(memory_dic[end_xy])\n",
    "#         else:\n",
    "#             end_id=xy_to_id(ex,ey)\n",
    "#             end_list.append(end_id)\n",
    "#             memory_dic[end_xy]=end_id\n",
    "#     mean_index_trans['start_id']=start_list\n",
    "#     mean_index_trans['end_id']=end_list\n",
    "#     mean_index_trans.drop(['start_grid_x','start_grid_y','end_grid_x','end_grid_y'],axis=1,inplace=True)\n",
    "#     grouped_tranId=mean_index_trans.groupby(['start_id','end_id'])\n",
    "#     write_tranId=grouped_tranId['index'].agg(np.sum).reset_index()\n",
    "#     write_tranId.to_csv('dataset/transferId_'+city_id+'.csv',index=False)\n",
    "#     # transferId_A stores, for each region of city A, the internal and inbound transfer strength\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Feature Construction\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The density feature\n",
    "A plain pd.merge() is not enough, because density depends on the date: for a given sample, first find the density sampling date closest before the sample's date, then\n",
    "return the average density of the sample's region on that sampling date.\n",
    "### The transfer feature\n",
    "Each region ID has both internal transfer and transfer between IDs, so two features can be built. For the region with ID 0:\n",
    "- the sum over (0,0) represents its internal strength\n",
    "- (1,0)+(2,0)+....+(i,0)+.... represents the strength flowing into 0 from outside\n",
    "### The weather feature\n",
    "The weather feature covers the N days before the current date, where N can be set to the incubation period; preprocess the weather data by dropping the hour column and aggregating per day."
   ]
  },
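  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 'latest sampling date before the sample date' rule for the density feature can be sketched with bisect (the dates are the 12 sampling days listed in the code below; the helper name is mine):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from bisect import bisect_right\n",
    "\n",
    "# The 12 density sampling dates\n",
    "date_list = [21200506, 21200509, 21200513, 21200516, 21200520, 21200523,\n",
    "             21200527, 21200530, 21200603, 21200606, 21200610, 21200613]\n",
    "\n",
    "def latest_sample_date(date):\n",
    "    # index of the last sampling date <= date; clamp to the first sample for earlier dates\n",
    "    i = bisect_right(date_list, date) - 1\n",
    "    return date_list[max(i, 0)]\n",
    "\n",
    "print(latest_sample_date(21200515))  # 21200513, the nearest earlier sampling day\n",
    "print(latest_sample_date(21200701))  # 21200613, clamped to the last sampling day\n"
   ]
  },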
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---- processing city_A\n",
      "---- processing city_B\n",
      "---- processing city_C\n",
      "---- processing city_D\n",
      "---- processing city_E\n",
      "---- processing city_F\n",
      "---- processing city_G\n",
      "---- processing city_H\n",
      "---- processing city_I\n",
      "---- processing city_J\n",
      "---- processing city_K\n"
     ]
    }
   ],
   "source": [
    "#------------------ merge the migration tables of all cities --------------\n",
    "migration_A=train_data['city_A']['migration']\n",
    "migration_B=train_data['city_B']['migration']\n",
    "migration_C=train_data['city_C']['migration']\n",
    "migration_D=train_data['city_D']['migration']\n",
    "migration_E=train_data['city_E']['migration']\n",
    "migration_F=train_data['city_F']['migration']\n",
    "migration_G=train_data['city_G']['migration']\n",
    "migration_H=train_data['city_H']['migration']\n",
    "migration_I=train_data['city_I']['migration']\n",
    "migration_J=train_data['city_J']['migration']\n",
    "migration_K=train_data['city_K']['migration']\n",
    "migration_merge=pd.concat([migration_A, migration_B,migration_C, migration_D,migration_E,\n",
    "                           migration_F, migration_G,migration_H, migration_I,migration_J,migration_K]).reset_index(drop=True)\n",
    "\n",
    "potential=12  # incubation period, in days\n",
    "for city_id in ['A','B','C','D','E','F','G','H','I','J','K']:\n",
    "    print(\"---- processing city_%s\"%city_id)\n",
    "    city_data = train_data['city_' + city_id]\n",
    "    transfer = city_data['transfer']\n",
    "    density = city_data['density']\n",
    "    migration = city_data['migration']\n",
    "    infection = city_data['infection']\n",
    "    weather = city_data['weather']\n",
    "    grid_attr = city_data['grid_attr']\n",
    "    #---------- step 1: join infection with grid_attr ------------\n",
    "    df_train=pd.merge(infection, grid_attr, left_on=['region_id'], right_on=['region_id'], how='left')\n",
    "    df_train.drop(['grid_x','grid_y'],axis=1,inplace=True)\n",
    "    df_train.drop_duplicates(keep='first',inplace=True)\n",
    "    df_train.reset_index(drop=True,inplace=True)\n",
    "    #df_train.head(2)\n",
    "\n",
    "    #------------------ weather feature: use the N days before the current date, N = incubation period --------------\n",
    "    #---------- preprocess weather: drop the hour column, aggregate per day -------------\n",
    "    # for object columns, return the value that occurs most often within the day\n",
    "    def func(df):\n",
    "        tmp=df.tolist()\n",
    "        return max(tmp,key=tmp.count)\n",
    "    weather=weather.fillna(0)  # fill missing values first\n",
    "    # groupby day to drop the hour info; the date column was misspelled 'data' at load time, kept for now\n",
    "    wea_grouped=weather.groupby(['data'])\n",
    "    wea_temp=wea_grouped['temperature'].agg(np.mean).reset_index()\n",
    "    wea_humi=wea_grouped['humidity'].apply(func).to_frame().reset_index()\n",
    "    wea_dire=wea_grouped['wind_direction'].apply(func).to_frame().reset_index()\n",
    "    wea_speed=wea_grouped['wind_speed'].apply(func).to_frame().reset_index()\n",
    "    wea_force=wea_grouped['wind_force'].apply(func).to_frame().reset_index()\n",
    "    wea_type=wea_grouped['weather'].apply(func).to_frame().reset_index()\n",
    "\n",
    "    #--------------- build the density and transfer features ------------------\n",
    "    date_list=[21200506, 21200509, 21200513, 21200516, 21200520, 21200523,\n",
    "    21200527, 21200530, 21200603, 21200606, 21200610, 21200613]\n",
    "    df_trans=pd.read_csv('dataset/transferId_'+city_id+'.csv')  # per-region internal and external transfer strength\n",
    "    # density feature: find the density sampling date closest before the sample's date\n",
    "    pkl_file = open('dataset/density_'+city_id+'.pkl', 'rb')\n",
    "    density_dic = pickle.load(pkl_file)\n",
    "    def get_densityFeature(date):\n",
    "        # latest sampling date not after `date`; clamp to the first sample for earlier dates\n",
    "        for i in range(12):\n",
    "            if date_list[i]>date:\n",
    "                return date_list[max(i-1,0)]\n",
    "        return date_list[11]\n",
    "    # transfer feature\n",
    "    def get_transFeature(region_id):\n",
    "        iner=0  # internal strength\n",
    "        ext=0  # external (inbound) strength\n",
    "        for _,start_id,end_id,index in df_trans.itertuples():\n",
    "            if start_id==region_id and end_id==region_id:\n",
    "                iner=iner+index\n",
    "            elif end_id==region_id:  # inbound only: exclude the internal (i,i) flow\n",
    "                ext=ext+index\n",
    "        return iner,ext\n",
    "\n",
    "    # weather feature\n",
    "    def get_weatherFeature(date):\n",
    "        date_list=weather['data'].unique()  # the 45 days\n",
    "        for i in range(len(date_list)):\n",
    "            if date_list[i]>date:\n",
    "                break\n",
    "        if i<potential:\n",
    "            tmp_date=[date_list[j] for j in range(i)]\n",
    "        else:\n",
    "            tmp_date=[date_list[j] for j in range(i-potential,i)]  # use an incubation-period-long window for the weather\n",
    "\n",
    "        temper_list=[]  # temperature\n",
    "        humi_list=[]  # humidity\n",
    "        dire_list=[]  # wind direction\n",
    "        speed_list=[]  # wind speed\n",
    "        force_list=[]  # wind force\n",
    "        wtype_list=[]  # weather type\n",
    "        for per_d in tmp_date:\n",
    "            temper_list.append(wea_temp[wea_temp['data']==per_d]['temperature'].reset_index(drop=True)[0])\n",
    "            humi_list.append(wea_humi[wea_humi['data']==per_d]['humidity'].reset_index(drop=True)[0])\n",
    "            dire_list.append(wea_dire[wea_dire['data']==per_d]['wind_direction'].reset_index(drop=True)[0])\n",
    "            speed_list.append(wea_speed[wea_speed['data']==per_d]['wind_speed'].reset_index(drop=True)[0])\n",
    "            force_list.append(wea_force[wea_force['data']==per_d]['wind_force'].reset_index(drop=True)[0])\n",
    "            wtype_list.append(wea_type[wea_type['data']==per_d]['weather'].reset_index(drop=True)[0])\n",
    "        # most frequent value over the previous N days (mean for temperature)\n",
    "        re1=np.mean(temper_list)  # temperature uses the mean rather than the mode\n",
    "        re2=max(humi_list,key=humi_list.count)\n",
    "        re3=max(dire_list,key=dire_list.count)\n",
    "        re4=max(speed_list,key=speed_list.count)\n",
    "        re5=max(force_list,key=force_list.count)\n",
    "        re6=max(wtype_list,key=wtype_list.count)\n",
    "        return  re1,re2,re3,re4,re5,re6\n",
    "\n",
    "    # migration feature\n",
    "    # e.g. for city A: the total migration index into A over the N days before the current date\n",
    "    def get_migrateFeature(date,city_name):\n",
    "        date_list=weather['data'].unique()  # the 45 days\n",
    "        for i in range(len(date_list)):\n",
    "            if date_list[i]>date:\n",
    "                break\n",
    "        if i<potential:\n",
    "            tmp_date=[date_list[j] for j in range(i)]\n",
    "        else:\n",
    "            tmp_date=[date_list[j] for j in range(i-potential,i)]  # incubation-period window for inter-city migration\n",
    "        re=0\n",
    "        for _,per_date,dep,arr,value in migration_merge.itertuples():\n",
    "            if arr==city_name and (per_date in tmp_date):\n",
    "                re+=value\n",
    "        return re\n",
    "    #\n",
    "    density_value=[]  # density\n",
    "    iner_transfer=[]  # internal strength\n",
    "    ext_transfer=[]  # external strength\n",
    "    temper=[]  # temperature\n",
    "    humi=[]  # humidity\n",
    "    dire=[]  # wind direction\n",
    "    speed=[]  # wind speed\n",
    "    force=[]  # wind force\n",
    "    wtype=[]  # weather type\n",
    "    mig_list=[]  # inter-city migration feature\n",
    "    for _,_,region_id,date,_ in df_train.itertuples():\n",
    "        # density feature\n",
    "        loc_date=get_densityFeature(date)\n",
    "        density_value.append(density_dic[loc_date]['index'][region_id])  # positional lookup; assumes region IDs run 0..n-1\n",
    "        # transfer feature\n",
    "        iner,ext=get_transFeature(region_id)\n",
    "        iner_transfer.append(iner)\n",
    "        ext_transfer.append(ext)\n",
    "        # weather feature\n",
    "        re1,re2,re3,re4,re5,re6=get_weatherFeature(date)\n",
    "        temper.append(re1)\n",
    "        humi.append(re2)\n",
    "        dire.append(re3)\n",
    "        speed.append(re4)\n",
    "        force.append(re5)\n",
    "        wtype.append(re6)\n",
    "        # migration feature\n",
    "        mig_list.append(get_migrateFeature(date,city_id))\n",
    "\n",
    "    df_train['density']=density_value\n",
    "    df_train['iner_transfer']=iner_transfer\n",
    "    df_train['ext_transfer']=ext_transfer\n",
    "    df_train['temperature']=temper\n",
    "    df_train['humidity']=humi\n",
    "    df_train['wind_direction']=dire\n",
    "    df_train['wind_speed']=speed\n",
    "    df_train['wind_force']=force\n",
    "    df_train['weather_type']=wtype\n",
    "    df_train['migration']=mig_list\n",
    "    #df_train.head(2)\n",
    "    df_train.to_csv('dataset/features/'+'features_'+city_id+'.csv',index=False)"
   ]
  }
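  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a closing sketch of the regression step, scikit-learn's GradientBoostingRegressor can stand in for xgboost/lightgbm/catboost (the features here are random stand-ins, not the real features_X.csv files):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.ensemble import GradientBoostingRegressor\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "X = rng.rand(200, 5)  # stand-in for the constructed feature columns\n",
    "y = X[:, 0] * 3 + X[:, 1] + rng.normal(0, 0.1, 200)  # synthetic target\n",
    "X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)\n",
    "model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)\n",
    "print(round(model.score(X_te, y_te), 3))  # R^2 on the held-out split\n"
   ]
  }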
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
